Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards
Accept (poster)
Summary: In this paper, the authors analyze the MED algorithm proposed by Honda & Takemura (2011) for Bernoulli distributions in the context of general bounded distributions, under the name KL-Maillard Sampling. This work is a follow-up to a previous work that proposed Maillard Sampling for sub-Gaussian distributions. KL-MS is a bandit algorithm that samples each arm $a$ at time $t$ with probability $p_{t,a} \propto \exp(-N_a(t)\, \mathrm{kl}(\hat\mu_{t-1,a}, \hat\mu_{t-1, \max}))$, where kl is the KL divergence between Bernoulli distributions. The interest of this simple strategy is that one can explicitly compute the probability of pulling each arm, which is useful, for instance, in the context of off-policy evaluation. Contrary to the initial work of Honda & Takemura, which focused on instance-dependent bounds, and in the spirit of the previous MS paper, the authors provide both optimal instance-dependent and minimax bounds for the algorithm for Bernoulli distributions. The same guarantees naturally hold for general bounded distributions, losing the optimality of the problem-dependent guarantees compared to the original MED algorithm using the ``tight'' divergence. It is also proved that the worst-case guarantees scale with the standard deviation of the best arm, which is on par with what is known in the sub-Gaussian case. Strengths: * The paper is well-written, clear, and easy to follow. Furthermore, it seems technically sound, and the proofs are carefully detailed. The literature review covers the scope of the paper well. * The analysis of the asymptotic optimality of MED in the Bernoulli case is largely simplified compared to the original proof of Honda & Takemura (2011). Furthermore, the worst-case optimality is a novel result compared to that work. Compared to the previous MS paper, it is also interesting that the optimal minimax ratio is achieved without tweaking the algorithm. 
To prove these results, the authors make good use of the analysis tricks introduced recently in the TS literature (e.g., all the cited works of Jin et al.). * The main novel element/insight of the paper compared to its two major inspirations is the refined analysis of the "under-exploration" term (F3) in the regret analysis, which leads to the $\log(K)$ vs. $\log(T)$ improvement of the minimax ratio of MS without having to change the algorithm, making MS+ obsolete. The changes in the proof for this are rather substantial, so the contribution is valuable. * Maybe the most surprising result in the paper is the minimax bound scaling with the variance term $\mu_1(1-\mu_1)$. I would therefore have appreciated some explanation in the main text as to where Theorem 4 comes from. If I understand correctly, it follows from a tighter version of Pinsker's inequality (Lemma 28), which is worth highlighting. While interesting, this trick could certainly be applied to the analysis of other bandit algorithms (as done for KL-UCB in an appendix), so the result cannot really be interpreted as an indicator of the superior performance of KL-MS. Weaknesses: * I am a bit uncomfortable with the re-branding of MED as KL-MS. This was understandable for the initial MS paper, since Honda & Takemura tackled bounded distributions, but here the algorithm exactly matches MED for Bernoulli distributions. Furthermore, it is folklore that for divergence-based algorithms the Bernoulli divergence can be used for general bounded distributions. Hence, there is no real reason for this re-branding in my opinion. However, I insist on the fact that I find the novel elements of analysis interesting. * Regarding the analysis, it seems that it differs from the original paper only in term (F3). The way the authors handle this term is very interesting and a valuable contribution in itself, but this should be better highlighted in the paper. 
However, even this part seems largely inspired by recent papers from Jin et al. on the analysis of Thompson Sampling, so I wonder whether there is a really novel theoretical contribution in the paper. * Minor: the authors should cite a recent follow-up by Honda and co-authors on the MED algorithm: https://arxiv.org/abs/2303.06058 This does not alter the contribution of this paper, since those authors focus on problem-dependent guarantees of MED for a broader class of distributions, but it answers some of the questions presented in the conclusion of the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I don't have specific questions for this work; everything seems clear to me. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
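The sampling rule described in this review's summary (each arm's probability proportional to $\exp(-N_a(t)\,\mathrm{kl}(\hat\mu_a, \hat\mu_{\max}))$ with the Bernoulli kl) can be sketched in a few lines. A minimal illustration under invented means and counts; function and variable names are ours, not from the paper:

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    # kl(p, q) between Bernoulli means, clipped away from {0, 1} for stability
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ms_probs(means, counts):
    # KL-MS / MED-style rule: weight_a = exp(-N_a * kl(mu_a, mu_max)),
    # then normalize; the empirically best arm has kl = 0, hence weight 1
    mu_max = max(means)
    weights = [math.exp(-n * bernoulli_kl(m, mu_max))
               for m, n in zip(means, counts)]
    total = sum(weights)
    return [w / total for w in weights]

# toy state: empirical means and pull counts of three arms
probs = kl_ms_probs(means=[0.8, 0.5, 0.2], counts=[50, 30, 20])
```

The closed form is the whole point made in the review: unlike Thompson Sampling, these probabilities can be logged exactly alongside the chosen arm, which is what off-policy evaluation needs.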
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to thoroughly review our work and provide valuable feedback. *(1) The common mechanism shared by MED and KL-MS.* We agree with the reviewer that under Bernoulli environments, our algorithm and MED are identical. The main reason we see our algorithm as a 'Maillard sampling'-style algorithm is that its action probabilities are computed based only on the empirical mean rewards of all arms. In contrast, MED algorithms and their variants compute action probabilities based on a notion of 'empirical divergence' that relies on the full empirical distribution of rewards. *(2) The novelty of our paper should be appropriately highlighted.* We include the most challenging part in Appendix D.3.1, "Roadmap of analysis of F3," and highlight some critical techniques that may be of independent interest. For example, extending MS to the KL version is not trivial and requires much work. Also, the separating point in F3 took a lot of work to find. Please also see our global response for a recap of our analysis and technical highlights. *(3) Reference [1].* Thank you for the reference. Indeed, [1] gives randomized and off-policy-amenable algorithms that achieve asymptotic optimality for unbounded rewards, which answers our question in lines 298-300. However, it does not (yet) yield easily interpretable finite-time regret bounds, which we believe is an important research direction. [1] D. Baudry, K. Suzuki, and J. Honda. A general recipe for the analysis of randomized multi-armed bandit algorithms, 2023. --- Rebuttal Comment 1.1: Title: post-rebuttal comment Comment: Thank you for your response. Along with the other reviews, it confirms my positive evaluation of the paper. (1) I see, so for you the terminology "Maillard Sampling" may refer to algorithms that are easier-to-compute (but sometimes sub-optimal) proxies of MED in some sense? (2) I see. 
I still believe that most of the difficulty in handling this term has been addressed in other works (of Jin et al.), but given the technicality of the arguments this is not a limitation of the paper, and I agree that the proof must have required some work. (3) I agree with your comment. This is related to your discussion with reviewer Gstc, but for the bounded case I believe that there is indeed some work needed to adapt your analysis to MED, because, for instance, the existing concentration inequality on KL-inf scales as $n \times \exp(-n \ldots)$ (while for the Bernoulli kl we have $\exp(-n \ldots)$), and this multiplicative $n$ would itself worsen the worst-case bound. Obtaining tighter concentration may be challenging, and it is not even clear that this is possible. Hence, the MS framework has an interest in the sense that it makes worst-case analysis easier with the tools developed by Jin et al. for exponential families, since it only requires concentrating empirical means. --- Reply to Comment 1.1.1: Comment: (1) Yes, that was our intention. We also see your point that, under a generous interpretation of MED in the sense of [Baudry, Suzuki, and Honda, 2023, Eq. (4)], by choosing $D_\pi(F_k(t), \mu^\star(t))$ to be $\mathsf{kl}(\mu_k(t), \mu^\star)$, MED specializes to our KL-MS. (2, 3) We wish to point out that only a small part of our proof is inspired by [Jin et al. 2022] (specifically, our application of Lemma 25 in bounding $F3_1$ is inspired by their usage of Lemma A.4 to prove Lemma A.1, which is for refining the minimax ratio from $\sqrt{ \dot\mu_1 \ln T }$ to $\sqrt{\dot \mu_1 \ln K}$; even that time-uniform concentration inequality on empirical rewards was originally due to [Menard and Garivier, 2017], to the best of our knowledge). For the high-level case splits in bounding $\mathbb{E}[N_{T, a}]$, [Jin et al. 2022] uses a standard split in the frequentist analysis of Thompson Sampling [e.g. Agrawal and Goyal, 2017, Eq. 
(2)], depending on whether the posterior sample of arm i exceeds $\mu_1 - \varepsilon$; In contrast, our split of F1, F2, F3 is similar to (and perhaps simplifies) the analysis of MS [Maillard, 2013, Bian and Jun, 2022] and MED [Honda and Takemura, 2010]. (3) We acknowledge the reviewer’s finding and appreciate the explanation.
Summary: This paper considers a classic bandit problem, where the algorithm must explicitly output the random distribution of the next arm to pull (by comparison, in the classic case, the algorithm only needs to generate one arm from this random distribution and output that arm). Existing results only cover the case where the random rewards are unbounded and sub-Gaussian. In this paper, the authors extend the existing works to the case where the rewards are bounded in $[0,1]$ and design a KL-MS algorithm (using a KL-divergence approach). They show that the regret upper bound of KL-MS is near optimal. They also use some experiments to show that KL-MS (which outputs the random distribution precisely) outperforms existing baselines. Strengths: The regret bound in this paper is nearly tight. The writing is clear and easy to understand. Weaknesses: My first concern is about the model setting, i.e., why do we need to know the exact random distribution of pulling the arm? Though there is an example of estimating the average reward, I do not think this is a well-motivated one. Can you provide more examples of why we need that distribution in reality? Besides, why not just use a UCB-type algorithm, which can easily give you the exact probability distribution, along with a tight analysis? I know that there are works, e.g., MOSS, that achieve the tight $O(\sqrt{KT})$ regret upper bound, but I am not very sure whether there are UCB algorithms that achieve an $O(\sqrt{\mu(1-\mu)KT})$ regret upper bound (though I think after using some variance-based concentration, the steps are straightforward). My second question is about the experiments of this paper (in Appendix H). I do not see any comparison of the regrets between different algorithms, and I am wondering about the regret performance of KL-MS. 
Finally, do you think the idea (of either MS or KL-MS) could be applied to the infinite-arm case (e.g., linear bandits), for example, by returning a distribution supported on an infinite set? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the above "Weaknesses" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to thoroughly review our work and provide valuable feedback. *(1) Why do we need the exact action distribution in reality?* Our motivation comes from the broad field of off-policy evaluation and optimization for contextual bandits and reinforcement learning, where learners use previously collected logged data to make inferences about the unknown environment [1]. For example, in an online advertisement recommendation system, before deploying a new policy, the platform would like to evaluate its performance using historical data (collected by previous policies), possibly for safety considerations. As we demonstrated in the experiments, Thompson Sampling, despite its good regret performance, when combined with Monte Carlo estimation of action probabilities, yields logged data that produces reward estimates less reliable than those of KL-MS (which instead maintains closed-form action probabilities). *(2) Regret comparison between algorithms.* Since we focus on the performance of offline policy evaluation, we put the most essential plots in Appendix H. We now include two experimental evaluations comparing KL-MS, MS, and Thompson Sampling in the global response for your reference. *(3) Generalizing KL-MS to the infinite-arm case.* We agree that this is an interesting topic and leave it for future work. [1] Saito, Y., Udagawa, T., Kiyohara, H., Mogi, K., Narita, Y., & Tateno, K. (2021, September). Evaluating the robustness of off-policy evaluation. In Proceedings of the 15th ACM Conference on Recommender Systems (pp. 114-123). --- Rebuttal Comment 1.1: Title: Thank you Comment: Thanks for your reply. For (1), I am still wondering whether we can use a UCB method to achieve the same goal. Can you give me some insights into why UCB-based policies do not work in your example? 
--- Reply to Comment 1.1.1: Comment: Note that deterministic exploration algorithms such as UCB generate logged data that cannot be reliably combined with the IPW estimator for offline evaluation. More precisely, logged data generated by a UCB-based policy has $p_{t,I_t}=1$ for all $t$. Considering the offline evaluation setup in Appendix I, the IPW estimator will be $\hat\mu := \sum_{t=1}^T \frac{r_t}{K T}$. However, such an IPW estimator is biased with respect to the estimation target $\mu = \frac{1}{K} \sum_{i=1}^K \mu_i$. For a UCB-type algorithm, the fraction of the optimal arm among the historical arm pulls, $N_{T,1}/T$, will go to $1$; therefore, when we let $T \rightarrow \infty$, $\hat\mu = \sum_{t=1}^T \frac{r_t}{K T} \rightarrow \frac{\mu_1}{K}$, which is not equal to $\mu$ in general.
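The limit computed in this reply can be checked with toy numbers (the arm means below are ours, purely illustrative): a logger that ends up playing only the best arm with probability 1 drives the IPW estimate of the uniform policy's value to $\mu_1/K$, whereas a uniformly random logger is unbiased.

```python
# Toy check of the bias argument above (arm means are invented).
K = 2
mus = [0.9, 0.1]                      # true arm means, arm 0 is optimal
target = sum(mus) / K                 # value of the uniform policy: 0.5

# Deterministic (UCB-like) logger: asymptotically p_{t,I_t} = 1 and I_t = best
# arm, so the IPW estimate (1/T) sum_t (1/K) r_t / p_{t,I_t} tends to mu_1 / K.
ipw_limit_deterministic = mus[0] / K  # 0.45, not 0.5: biased

# Uniformly random logger (p_{t,a} = 1/K for every arm): unbiased, since
# E[(1/K) r_t / p_{t,I_t}] = sum_a (1/K) * mu_a * (1/K) / (1/K) = target
ipw_limit_uniform = sum((1 / K) * m * (1 / K) / (1 / K) for m in mus)
```

The point of the reply is exactly this gap: IPW needs the logger's action probabilities to be known and bounded away from degenerate values.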
Summary: The submission considers the vanilla setting of stochastic K-armed bandits and studies a strategy introduced by Maillard (2013), which relies on exponential weights and outputs at each round probabilities of taking each action. This is often convenient in offline policy evaluation, when estimates based on inverse propensity weighting [IPW] are constructed. Distribution-dependent and distribution-free regret bounds are provided, either in a general non-parametric model of all probability distributions over [0,1], or in the much more specific model of Bernoulli distributions. The general distribution-dependent regret bounds asymptotically match the gap-based bounds of UCB (Theorem 1 and Remark 2) and are actually optimal in the Bernoulli model (Theorem 5). The general distribution-free bound improves on the one for UCB by featuring a \sqrt{\mu^\star (1-\mu^\star)} term (Theorem 3). Another main result is formed by Figure 1 and Table 1: there are actually few randomized strategies (Thompson sampling, MED) and none of them exhibits closed-form expressions for the probabilities of plays. This shows that the core result of this article is: a strategy for the vanilla case of stochastic K-armed bandits, with decent (though not optimal) distribution-dependent and distribution-free regret bounds, and based on determining actual probabilities of playing arms, which is useful for offline policy evaluation. Strengths: The idea of constraining the strategy to output probability distributions while getting decent bounds is nice and may turn out to be useful---I have witnessed several recent articles critically using IPW in bandit contexts. By 'decent bounds', I mean bounds that are as good as, or slightly better than, UCB, but not optimal (as IMED and recent versions of KL-UCB achieve). The exposition is clear and I enjoyed reading the main body of the submission. Weaknesses: 1. 
Exponential weights are actually difficult to compute in practice with good accuracy, at least for suboptimal arms that are played often. Perhaps this case does not arise (suboptimal arms are played only logarithmically many times and the probabilities are easy to compute with good accuracy), but the accuracy of the computation should be commented on, especially given the critiques of Thompson sampling on these issues on page 2. 2. The comparison to previous works could be clarified and reorganized on pages 4--5. In particular, it would have been better to first recall the typical (e.g., for UCB) as well as the optimal distribution-dependent and distribution-free regret bounds, in the Bernoulli model and in the model P(0,1) of all distributions over [0,1], when no constraint of outputting probability distributions is imposed. The sub-UCB criterion could be omitted; I don't think it adds anything. The literature review seems a bit outdated. In particular, a new reference is critically missing: https://www.jmlr.org/papers/volume23/20-717/20-717.pdf ; it shows that in the model P(0,1) there exists a strategy called KL-UCB-switch that simultaneously achieves the optimal distribution-dependent and the optimal distribution-free bounds. 3. The regret bounds are imprecise: (i) they involve O(...) terms, (ii) even the main terms are difficult to read because of the +/- c \Delta_a terms in the kl, and (iii) that is not to mention the additive T \Delta term, where \Delta is a parameter that must then be << \ln T / T and therefore should vanish. Even worse, Lemma 9 and Theorem 5 are proved by taking \Delta = 0 in the bound of Theorem 1, while Theorem 1 assumes \Delta > 0. 4. The proof sketches are too vague: pages 8-9 merely indicate a proof structure in terms of decompositions of events and other immediate considerations, but the actual bounding of the probabilities of interest is not explained in the main body (it is detailed in the extremely long appendix). 
At least two or three salient (new?) ingredients of these proofs should have been given in the main body. What I read on pages 8-9 is too high-level and actually takes almost 2 pages without the reader learning anything specific. I regretfully couldn't check the proofs and get a sense of their correctness, but I don't feel guilty about this, as almost nothing in the main body helped me do so. Better editorial choices could have been made as far as proof sketches are concerned. Other comments / remarks / typos along the text: - Lines 9-10: I wouldn't insist on the distribution-dependent optimality for Bernoulli distributions (which is a minor point) but rather on getting decent (better than UCB) bounds - Line 28: also Garivier and Cappé 2011 - Footnote 1: yes, but for known L and U - Caption of Figure 1: difficult to understand in itself; I had to read lines 770--773 in the appendix to understand (these explanations should thus be moved to the main body) - Line 48: the empirical averages \mu_{t,a} were not formally defined - Lines 54-55: sounds like an overstatement; the submission proposes decent but not optimal distribution-dependent and distribution-free regret bounds - Table 1: Tsallis-INF is among the few strategies that output distributions, so it should not have been excluded on the ground of a minor issue given its non-optimal bound for Bernoulli distributions; this Table 1 is generally not helpful given that it contains too many strategies that do not output distributions - Lines 65-67: I would be less enthusiastic; getting rid of the \sqrt{\ln K} is uneasy but was achieved for some algorithms that are optimal from the distribution-dependent viewpoint (see https://www.jmlr.org/papers/volume23/20-717/20-717.pdf), though I appreciate as well the \sqrt{\mu^*(1-\mu^*)} term, which indeed may be smaller than \sqrt{\ln K} in some situations - Line 69: drop 'with' - Line 82: OK for this definition in case of absolute continuity; otherwise, = +\infty - Lines 
108-109: syntax issue - Line 119: > b, not \geq b; this quantity is called K_inf - Lines 154-155: some strategies like KL-UCB-switch (see https://www.jmlr.org/papers/volume23/20-717/20-717.pdf) would be sub-UCB and enjoy a \sqrt{KT} distribution-free regret bound---I thus disagree with the statement made here - Lines 205-206: same kind of comments - Lines 239-241: there is no such issue if the Bernoullization trick of lines 115-119 is implemented - Line 258: rather (6) instead of (8) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would like to read authors' opinion on the four main weaknesses that I raised. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: (They are well-addressed in the conclusion, Section 6, and include extensions of the optimality results of Maillard sampling to general exponential families and even to the non-parametric setting of all distributions over [0,1].) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
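The "Bernoullization trick" the review refers to (also called folklore in Review 1) reduces any $[0,1]$-bounded reward to a Bernoulli sample with the same mean, so Bernoulli-kl machinery applies. A sketch of this reduction, with our own function name:

```python
import random

def bernoullize(reward, rng):
    # B ~ Bernoulli(reward): a {0,1}-valued sample with E[B | reward] = reward,
    # valid for any reward in [0, 1]
    return 1 if rng.random() < reward else 0

# seeded toy check: the binarized samples keep the original mean
rng = random.Random(0)
samples = [bernoullize(0.3, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)    # concentrates around 0.3
```

This is why an algorithm stated for Bernoulli rewards (like KL-MS) extends to general bounded rewards, at the price of the looser Bernoulli kl compared to the "tight" divergence used by MED.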
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to thoroughly review our work and provide valuable feedback. We genuinely appreciate the reviewer carefully examining our results and offering insightful comments to improve the quality of our research. We have carefully considered each of the weaknesses raised by the reviewer and are committed to making the necessary revisions to solidify our arguments. Allow us to address each of the specific points the reviewer raised. *(1) The accuracy of the computation of KL-MS's action probabilities, and the comparison against Thompson sampling.* If we understand your concern ('Exponential weights are actually difficult to compute in practice with a good accuracy') correctly, the imprecision of the exponential weight calculation comes from floating-point underflow when evaluating the exponential function. This numerical imprecision is insignificant because it only affects the action probabilities by a minuscule additive amount, on the order of the smallest representable float. Also, regarding the inconsistency introduced by using log data generated by KL-MS versus TS: using TS inevitably introduces bias, since the agent cannot access the actual sampling probabilities, whereas KL-MS does not suffer from this issue. Please let us know if we have missed anything (we would appreciate it if you could elaborate on your concerns by, e.g., giving some references). *(2) The sub-UCB criterion could be omitted because it does not add anything.* Our paper focuses on a finite-time (i.e., nonasymptotic) analysis, and the sub-UCB criterion _does_ play an essential role in this view. More precisely, any algorithm satisfying the sub-UCB criterion is guaranteed not to suffer a higher regret order than UCB-like algorithms at any time. 
A typical example that illustrates the relevance of the sub-UCB property is in [1]’s section "Failure of MOSS", which gives a bandit instance of K arms such that any sub-UCB algorithm has regret at most O(K \ln K), while in contrast MOSS, a non-sub-UCB algorithm, has regret at least $\Omega(K^2)$, which is significantly larger when K is large. As another example, lines 148-150 (see also footnote 4) show that MED's best-known regret guarantee does not imply that MED is sub-UCB. Finally, recall from Table 1 that not all algorithms are sub-UCB. *(3) The comparison to previous works could be clarified and reorganized on pages 4--5.* The current organization of the previous works is based on the different criteria an algorithm satisfies. Thank you for your suggestion; we will add a discussion of earlier UCB algorithms. Thank you also for the reference on KL-UCB-switch; we will add relevant discussions in the final version. *(4) Imprecise regret bound in Theorem 1.* We appreciate the reviewer's careful reading. * We had a typo in Theorem 1 (and Lemma 8): '$\Delta > 0$' should instead be '$\Delta \geq 0$' (note that the proof of Theorem 1 continues to hold when $\Delta = 0$, specifically the equation display in line 439). * The +/- $\Delta_a$ terms in the main term are standard in the analysis of Bernoulli bandits, similar to the $\varepsilon_1$, $\varepsilon_2$ factors in the standard KL-UCB analysis [Lattimore and Szepesvari, Bandit Algorithms, Theorem 10.6]. * Note that in the downstream applications of Theorem 1, we do not always choose $\Delta \ll \ln T / T$ (although $\Delta \ll \ln T / T$ makes sense for showing asymptotic $\ln T$-style regret bounds): for example, in the proof of the worst-case regret bound (Theorem 3), we chose $\Delta = \sqrt{\dot\mu_1 K \ln K / T}$. * Throughout this paper, big O only hides absolute constants. 
The main reason we present KL-MS's regret bound in this form is its flexibility in deriving asymptotic and nonasymptotic regret guarantees: to derive asymptotic guarantees, we choose $\Delta = 0$ and treat the terms inside the big-O as lower-order terms; to derive nonasymptotic properties such as sub-UCB or worst-case regret guarantees, we are generous in giving up constant factors and apply Lemma 8. We will correct the typos and add these clarifications in the final version. *(5) The proof sketches are too vague.* We refer the reviewer to the global response for a recap of our proof sketches and technical highlights. *(6) Minor comments.* Thank you for these; we will make a pass over our paper to incorporate them. * Reorganizing Table 1: if we include Tsallis-INF, it also seems to make sense to include other randomized exploration algorithms, such as EXP3, EXP3-IX, and the Boltzmann-Gumbel exploration algorithm [4] (and many others), which, although they can maintain closed-form action distributions, do not have sharp regret guarantees in the stochastic setting, such as asymptotic optimality in the Bernoulli case and sub-UCB. Nevertheless, we are open to any suggestions that can help elucidate comparisons between our algorithm and previous works. [1] T. Lattimore. Refining the confidence level for optimistic bandit strategies. Journal of Machine Learning Research, 19(20):1–32, 2018. URL http://jmlr.org/papers/v19/17-513.html [2] Agrawal, S., & Goyal, N. (2017). Near-optimal regret bounds for Thompson sampling. Journal of the ACM (JACM), 64(5), 1-24. [3] Garivier, A., Hadiji, H., Menard, P., & Stoltz, G. (2022). KL-UCB-switch: optimal regret bounds for stochastic bandits from both a distribution-dependent and a distribution-free viewpoints. The Journal of Machine Learning Research, 23(1), 8049-8114. [4] Cesa-Bianchi, N., Gentile, C., Lugosi, G., & Neu, G. (2017). Boltzmann exploration done right. Advances in neural information processing systems, 30. 
--- Rebuttal Comment 1.1: Comment: I acknowledge reading the entire thread of reviews and corresponding rebuttals. On this specific rebuttal, I'm satisfied with answers 1-2-3. For answer 4, I believe that the KL-UCB-Switch paper is a good example of a paper with precise bounds not relying on O(...) terms, but perhaps this is too high a standard. I still believe that better proof sketches could have been provided, beyond the mere descriptions of the proof structures. All in all I am ready to increase my score to 5 and will update my report accordingly. --- Reply to Comment 1.1.1: Comment: We can give an exact bound of KL-MS’s regret by replacing the Big-O term in Eq. (3) with exact constants. Specifically, the exact form of Eq. (3) is $$\mathrm{Reg}(T) \leq T\Delta + \sum_{a: \Delta_a > \Delta} \frac{\Delta_a \ln(T \mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a) \vee e^2 )} {\mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a)} +\left( \frac{34}{c^2} + \frac{8}{(1-2c)^2} \right) \cdot \sum_{a: \Delta_a > \Delta} \left( \frac{\dot\mu_1 + \Delta_a}{\Delta_a} \right) \ln \left( \left( \frac{\dot\mu_1 + \Delta_a}{\Delta_a^2} \wedge \frac{T\Delta_a^2}{\dot\mu_1 + \Delta_a} \right) \vee e^2 \right) $$ . As an example, if we choose $c=\dfrac{1}{4}$, the final regret bound given by Eq. (3) would be $$\mathrm{Reg}(T) \leq T\Delta + \sum_{a: \Delta_a > \Delta} \frac{\Delta_a \ln(T \mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a) \vee e^2 )} {\mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a)} +576 \cdot \sum_{a: \Delta_a > \Delta} \left( \frac{\dot\mu_1 + \Delta_a}{\Delta_a} \right) \ln \left( \left( \frac{\dot\mu_1 + \Delta_a}{\Delta_a^2} \wedge \frac{T\Delta_a^2}{\dot\mu_1 + \Delta_a} \right) \vee e^2 \right) $$. To see this, we first note that Lemma 10 is exact in that it does not hide constant factors. 
By tracking the exact constants in the proof of Theorem 1 (lines 435-438), we have that $$ \mathbb{E}\left[ N_{T,a} \right] \leq \frac{\ln(T \mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a) \vee e^2 )}{\mathsf{kl}(\mu_a + c \Delta_a, \mu_1 - c \Delta_a)} + \left( \frac{34}{c^2}+\frac{8}{(1-2c)^2} \right) \cdot \left( \frac{\dot\mu_1 + \Delta_a}{c^2 \Delta_a^2} \right) \ln\left( \left( \frac{\dot\mu_1 + \Delta_a}{c^2 \Delta_a^2} \wedge \frac{c^2 T \Delta_a^2}{\dot\mu_1 + \Delta_a} \right) \vee e^2 \right). $$
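On point (1) of this thread (numerical accuracy of exponential weights), the authors' underflow claim can be made concrete with the standard max-shift (log-sum-exp) normalization; a sketch with toy logits of our own choosing:

```python
import math

def stable_probs(logits):
    # shift by the max logit so the largest weight is exp(0) = 1;
    # the normalizer then lies in [1, K] and cannot underflow
    m = max(logits)
    weights = [math.exp(x - m) for x in logits]
    z = sum(weights)
    return [w / z for w in weights]

# KL-MS logits are -N_a * kl(mu_a, mu_max); the empirically best arm has
# logit 0, so the shift is a no-op there, and a heavily sampled suboptimal
# arm's weight (here exp(-800)) underflows harmlessly to 0
probs = stable_probs([0.0, -800.0, -5.0])
```

As the rebuttal argues, underflow only zeroes out probabilities that are already astronomically small, so the logged propensities remain accurate.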
Summary: The paper studies the classical regret-minimization problem in the stochastic multi-armed bandit framework. In particular, the manuscript's focus is on randomized algorithms, with an aim to develop one with closed-form arm-selection probabilities at each step. Data collected by such algorithms can be used for offline policy evaluation. The manuscript proposes an algorithm called KL-MS for bounded-support distributions that achieves KL-style regret guarantees. It is asymptotically optimal (in the instance-dependent stochastic sense) for Bernoulli bandits, and order optimal for more general bounded-support distributions. It also enjoys an optimal worst-case regret guarantee. The paper also presents a numerical study comparing offline evaluation when the data is collected using the proposed KL-MS versus Thompson Sampling with Monte Carlo estimation on Bernoulli bandits. Strengths: The paper is well written and easy to read. I particularly enjoyed the various remarks and discussions after the results, providing insights into the results and comparing the analysis to existing ones. While in some settings randomness should also be treated as a resource (and hence be used with care), there are indeed benefits to using randomized algorithms in other settings. For example, as highlighted in the paper, the data collected by randomized algorithms can be used for offline evaluation. The paper develops an algorithm that simultaneously satisfies different desirable properties, while being optimal (or close to optimal) in a non-parametric setting of bounded-support distributions. Weaknesses: 1. The plots in the appendix are not very clear. The text along the vertical lines is overlapping and unreadable. It would be good to spread out the figures and probably provide them in vector form for clearer display. 2. In view of the recent results on the fragility of optimized bandit algorithms, I believe that MAB results should be studied and stated beyond the expected regret. 
See for instance "Fan, Lin, and Peter W. Glynn. "The fragility of optimized bandit algorithms." arXiv preprint arXiv:2109.13595 (2021)." Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In line 21, Reg(T) is referred to as pseudo-regret. How is this different from the usual expected regret? Why "pseudo" in the regret? 2. How would the analysis change (or what are the challenges in extending it) if, instead of the Bernoulli KL in the exponent, one uses the KL from the lower-bound optimization? Could the current results and analysis already be extended to that algorithm? I believe that is then the MED algorithm? It would then suggest a natural way to extend the algorithm and analysis to exponential families and more general distributions, like the heavy-tailed ones considered in Agrawal, Juneja, Koolen, 2021. A discussion along these lines would be interesting to see. 3. How does MS compare with KL-MS numerically? 4. How is the performance of the algorithm affected if the bounds [0,1] are not exactly known? For example, if the samples are from some misspecified setting, i.e., a distribution with support in [-0.5, 0.5] but the algorithm is only guaranteed [0,1]-supported distributions? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to thoroughly review our work and provide valuable feedback. *(1) The plots in the appendix could be clearer.* We appreciate the reviewer pointing out the problem. We will make the necessary modifications to present the plots more clearly. E.g., for the plots in Appendix H, we will remove the (overlapping) vertical text and retain the legend to clarify the plot. *(2) What is the definition of pseudo-regret?* For the definition of pseudo-regret, we follow the definition of [1, Eq. (1.4)]. This is the same as the 'expected regret' notion in the Fan & Glynn reference you gave (and probably what you meant by 'expected regret'). Note that another notion of 'expected regret' exists in the literature [1, Eq. (1.2)]. We will add a remark in the final version clarifying this terminology overload in the literature. *(3) The MAB results should be studied and stated beyond the 'expected regret' (in the sense of Fan and Glynn, 2021).* Thanks for the reference. We agree that studying the tail properties of the pseudo-regret (in the sense of Fan and Glynn, 2021) of MS is interesting and leave it for future work. *(4) How would the analysis change (or challenges in extending) if instead of Bernoulli-KL in the exponent, one uses KL (the lower bound opt.)?* By the KL (the lower bound opt.), we think that you meant KL(empirical distribution of arm a, maximum empirical reward) in the sense of the KL defined in our line 119. If so, indeed this would be the MED algorithm. We don't yet know how to adapt our analysis to provide a new analysis of MED, although we agree that this is an interesting question for further investigation. *(5) How does MS compare with KL-MS numerically?* We added two plots in the global response to show a comparison between MS and KL-MS in [0,1]-bounded reward stochastic bandit environments. 
KL-MS has lower regret overall, due to its exploitation of the [0,1]-bounded reward structure. *(6) Does KL-MS work in a misspecified model setting, say where the reward support is [-0.5, 0.5]?* In this case, KL-MS may not work correctly: according to the KL-MS algorithm (Eq. (2)), the KL is undefined if $\hat{\mu}_{t-1,a}$ is negative. [1] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems, 2012 --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for your response. I acknowledge reading the entire thread of reviews and corresponding rebuttals. For now, I don't have further questions.
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to thoroughly review our work and provide valuable feedback. Here we address two common points shared by the reviewers. **The comparison between algorithms in terms of regret.** We chose the reward setting following the experimental setup of the Thompson sampling literature [1], where we consider two 2-arm bandit environments with expected rewards [0.2, 0.25] and [0.80, 0.90] respectively, with 2000 simulation runs to estimate the regret of KL-MS, MS, and Thompson Sampling. Our result (see attached PDF file) shows that in terms of performance: (1) KL-MS is better than MS by exploiting the variance information of all arms; (2) KL-MS performs worse than Bernoulli Thompson Sampling; we suspect that this is due to Thompson sampling exploiting more aggressively in such relatively easy environments. **The proof idea and the novelty in the analysis (R2, R4).** Since the focus of our paper is on establishing both asymptotic and finite-time regret guarantees for the KL-MS algorithm (specifically, asymptotic optimality in the Bernoulli setting, the sub-UCB property, and the $\sqrt{ \dot\mu_1 \ln K }$ minimax ratio), we need to give a finite-sample bound on $\mathbf{E}[N_{T, a}]$, the expected number of pulls of suboptimal arm a. To this end, we divide it into four parts (divisions similar in spirit are standard in the analysis of bandit algorithms, e.g. [1, Eq. (2)]), and bound each part respectively. The four parts are: * u, a burn-in term; * F1, which corresponds to the "steady state" when the empirical means of arm a and the optimal arm are both estimated accurately; * F2, which corresponds to the case when the empirical mean of arm a is abnormally high; * F3, which corresponds to the case when the empirical mean of the optimal arm is abnormally low. As we mention in lines 272-275, bounding F1 and F2 is relatively straightforward, similar to the MS analysis [3]. Our main technical challenge lies in the analysis of F3. 
For that term, we had a detailed 'roadmap of analysis' in Appendix D.3.1 that explains our intuition and the main techniques used. Of these, we wish to highlight two techniques from our paper that can be of independent interest: * a careful double-integral argument, simplifying prior works, that bounds the expectation of a certain function of the empirical reward using a tail probability bound on the empirical reward; this already establishes Bernoulli asymptotic optimality, the sub-UCB property, and the $\sqrt{\dot\mu_1 \ln T}$ minimax ratio (lines 481-487). This effectively mimics a "peeling argument" over uncountably many layers. We can avoid unnecessary analysis by using the double-integral trick instead of deploying a finely-tuned peeling device as in [2], which can become cumbersome in the Bernoulli case. * To further establish the $\sqrt{\dot\mu_1 \ln K}$ minimax ratio, we conduct a refined analysis of F3 by splitting the cases based on whether the value of $N_{t,1}$ exceeds H, a new choice of threshold that ensures adaptivity to $\dot\mu_1$ (lines 498-512 and Remark 7). We apologize that the important Appendix D.3.1 (which carries our intuition and summarizes the key techniques) was not linked from the original main submission; we will add that link from the main paper in the final version. [1] Kaufmann, E., Korda, N., & Munos, R. (2012, October). Thompson sampling: An asymptotically optimal finite-time analysis. In International Conference on Algorithmic Learning Theory (pp. 199-213). Berlin, Heidelberg: Springer Berlin Heidelberg. [2] Agrawal, S., & Goyal, N. (2017). Near-optimal regret bounds for Thompson sampling. Journal of the ACM (JACM), 64(5), 1-24. [3] Bian, J., & Jun, K. S. (2022, May). Maillard sampling: Boltzmann exploration done optimally. In International Conference on Artificial Intelligence and Statistics (pp. 54-72). PMLR. Pdf: /pdf/e7c0800aae9047df073bdcca1b475e7c159621ff.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
Accept (poster)
Summary: This paper investigates how a transformer-based model can compositionally learn complex functions (e.g., an MLP) by breaking them down into atomic problems (e.g., linear mappings). Such an ability is also the crux of the success of chain-of-thought (CoT) in-context learning (ICL) methods in large language models. Hence this paper bridges the gap between these two fields by theoretically analyzing the sample complexity of different methods and the accelerating effect of CoT in pretraining. Although I believe the contribution of the paper is solid, novel, and helpful to the field, there are plenty of problems that make the paper hard to follow. I wish the authors could polish the paper and tackle some of my concerns to make the paper stronger. Hence at this stage, I would only give a borderline rejection. I would be very happy to increase my score during the rebuttal phase. Strengths: See the summary part. Weaknesses: 1. I find the paper hard to follow, maybe because the theoretical part, which I believe is the most important contribution, is too abstract. It is not easy to get intuition from the current Section 3.2, so maybe explaining how Theorem 1 is formulated would be helpful. 2. The experimental parts are not well organized. First, they appear almost everywhere (in Sections 2, 3, and 4), which breaks the flow of the paper significantly. Also, some of the subfigures in Figures 3, 4, and 5 contain similar information. So it would be good to re-organize these results to make the paper easier to follow. 3. In Section 4.3, the paper switches from the ICL setting to the pretraining setting. It would be beneficial to clearly explain the difference between them and why we need experiments in this setting. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. For the experiment in Figure 1, will ICL and CoT-I converge to a lower value if the number of samples keeps increasing? 
Because the sample complexity of CoT-I/O is lower than the other two and it converges at around 100 samples, would the other two methods converge if we use 500 samples? 2. In Figures 2 and 3, it is hard to distinguish different methods using dashed lines; using different markers might be good. 3. In Figure 3, if we want to see the influence of k, why not draw a figure using k as the x-axis? 4. Figure 3 is on page 2, but it is first referred to on page 4, which makes it hard to locate the figure when reading the paper. 5. Some typos and imprecise expressions: a.) In section 3.1, the first sentence, 'we train 2-layer MLPs'. Do we train this MLP? IIUC, this MLP is used to generate the training samples and is fixed all the time. b.) In section 3.2, the last paragraph, 'consider the consider the condition'. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: I'm not sure whether this is an unavoidable problem of CoT-style ICL, but it would be nice if the authors could answer the following problem somewhere in the paper, which might make the paper more solid. I think comparing the sample complexity (i.e., the number of examples in prompting) between these 3 algorithms is unfair. Imagine we have $k$ examples and the problem has $L$ steps. Then in ICL, there are only $k$ input vectors and 1 supervisory signal. In CoT-I, there are $Lk$ input vectors and 1 supervisory signal. For CoT-I/O, there are $Lk$ input vectors and $(L-1)k$ supervisory signals (I'm not sure in this paper's setting whether the ground truth $s_n^l$ is accessible in CoT-I/O, but if that is true, the comparison would be more unfair.) 
Given the differences in the inputs and supervisory signals, it might be straightforward to conclude that CoT has a smaller sample complexity than ICL. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and encouraging feedback. We hope that we have addressed your questions and concerns adequately below. > **W1. Theory is too abstract:** We provide below a more detailed description of the construction that implements this filtering, which we will ensure to include in our revised version. > > First off, we include a figure in the uploaded pdf which depicts the process at a high level. The idea is to filter out the pairs of data that are relevant to the current prediction. Suppose that we currently want to predict the $l$-th layer's output. In that case we want to filter out the input data $x_{l-1}, x_l$, $x_{L+l-1}, x_{L+l}, \ldots$, where $L$ is the total number of layers and $x_i$ are the input data. To do so, we implement the following steps: > >1. Given $n$ bits $b_i$, each taking value $0$ or $1$, we zero out any data point $x_i$ whose corresponding bit is zero. 2. The second step is to construct these bits, which are the indicators of what needs to be filtered. As we also mention in our main paper, this filtering procedure is agnostic to the token-to-be-predicted and is implemented in an automated way. > > A detailed proof is given in Section A of the appendix. Finally, we will highlight that the *self-attention mechanism plays a critical role in both the filtering and ICL stages, highlighting the transformer-specific nature of our theorem*. For instance, the attention layer is crucial for selecting which tokens should be processed next (see the attached Figure 1 for visualization). We will provide a discussion on this and also (experimentally) visualize the theorem's message on "Chain-of-Thought <=> Filtering + ICL + Looping" via attention maps. > **W2. Reorganize experimental section:** We recognize the lack of organization in our work and have made plans to revise and restructure it. Specifically, we will combine the experimental results and focus Section 3 purely on theory. 
Details can be found in the general response. > **W3. ICL vs pretraining setting:** Thanks for raising this question. The initial part of the paper focuses on the inference phase of in-context learning. That is, we are interested in the number of examples needed in the prompt to correctly predict the test query. By "pretraining" we mean the training phase of in-context learning, and we study the number of prompts the transformer needs to be trained with so that it can successfully in-context learn during inference. Our conclusion (based on deep linear MLPs) is that CoT helps improve the sample complexity of this training phase by learning shortcuts to represent complex functions. To avoid confusion, we will replace all "pretraining" phrases with "ICL training". > **Q1. Will ICL converge given more in-context samples? (attached Figure 5)** We appreciate the reviewer's query, and the short answer is: NO (unless we enlarge the model size). There are two determinants of model learnability in in-context tasks: sample complexity and model expressivity. Sample complexity pertains to the number of samples needed to precisely solve a problem. However, when the transformer model is small, even with a sufficiently large number of samples, ICL cannot achieve zero test risk due to its lack of expressivity. This contrasts with CoT, which decomposes complex tasks into simpler sub-tasks, thereby requiring smaller models for expression. Figure 4 in the paper illustrates the expressivity of different GPT-2 architectures, showing that the tiny GPT-2 model is too small to express even a single layer of 2-layer MLPs. Additionally, we have run more experiments, and the results are shown in Figure 5 in the attached file. Both Figures 5(a) and 5(b) detail training models with MLP tasks of dimensions $d=10$ and $k=4$. In Figure 5(a), we use a small GPT-2 model, and the results show that the test risk stops decreasing even with more in-context examples. 
In Figure 5(b), we train a larger model, and the results demonstrate that the standard GPT-2 is sufficient to express a 2-layer MLP with $d=10$ and $k=4$. > **Q2.** Thank you for the constructive point and suggestion. We will integrate the changes in the updated version. > **Q3.** We appreciate this perspective. We did consider this during our research. However, since test risks fluctuate significantly with in-context sample sizes, plotting the x-axis as $k$ presents challenges. We would have to either focus on a specific in-context sample size or display the averaged risks, making it difficult to determine whether CoT is universally superior to ICL or only in particular cases of in-context samples. Moreover, the current plots clearly show the performance alignments over different settings. Nevertheless, we still wanted to illustrate how the test risks vary with $k$, and for this purpose, we have included a bar figure in Figure 5(a) in the paper, showing the averaged risks. > **Q4.** We apologize for the lack of organization and appreciate your patience. We have devised a plan to reorganize our work, as outlined in the general response, and we hope this addresses your concerns. > **Q5.** Thank you for pointing out the errors, and we apologize for any confusion. The sentences and typos have been corrected. > **Limitations:** We appreciate the reviewer's thoughtful feedback. While we agree ICL has a smaller in-context window size compared to CoT, we want to emphasize that our setting is aligned with the common practice where reasoning prompts are typically denser than merely providing final answers, and explanations prove helpful. Moreover, as Reviewer zLZS suggests, we have conducted variant ICL experiments by filling the explanation with some random data, and the results are shown in the attached Figure 4. Figure 5(a) further illustrates that even with more in-context samples, ICL cannot surpass CoT due to task complexity and model expressivity limitations. 
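The zero-out filtering step described in the W1 answer above can be sketched as follows (a minimal illustration; the function name and the representation of tokens as plain vectors are our assumptions, not the paper's construction):

```python
def filter_tokens(tokens, bits):
    # Step 1 of the filtering construction: given bits b_i in {0, 1},
    # zero out any token x_i whose corresponding bit is zero,
    # leaving only the data relevant to the current prediction.
    return [t if b == 1 else [0.0] * len(t) for t, b in zip(tokens, bits)]

# Hypothetical 2-dimensional tokens; keep the first and third, drop the second.
kept = filter_tokens([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [1, 0, 1])
```

In the paper's construction the bits themselves are produced automatically (step 2), so the selection is agnostic to which token is being predicted.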
--- Rebuttal Comment 1.1: Title: Thanks for the feedback. Comment: Thanks very much for the authors' feedback, which resolves most of my concerns. As I mentioned in my original review, I believe the contribution of this paper is valuable and inspiring. Considering the authors will improve the presentation in the next version, I would increase my score from 4 to 6. I am looking forward to seeing the final version. --- Reply to Comment 1.1.1: Comment: Many thanks for your thoughtful review and positive feedback! Your suggestions have been very valuable; we have revised and further improved our presentation based on them. Title: Thank you!
Summary: The paper proposes to study chain-of-thought (prompting) in the setting of learning MLPs. The authors build on top of recent work studying in-context learning of linear regression tasks in the light of gradient descent and extend this setting to learning non-linear functions. In order to study chain-of-thought prompting, they either provide the student with features / hidden activations from the "teacher" MLP (CoT-I setting) or make the Transformer produce its own input by "looping" (CoT-I/O setting). They provide theoretical results and empirical evidence that the Transformer is more sample efficient due to CoT by leveraging the given inputs and/or can remember weights coming from the family of teachers to allow for the self-production of features due to looping. Strengths: I like the abstraction of CoT presented in the paper. The experiments are convincing and supported by the theoretical results. Although this setting is quite simple, it nicely extends the recent studies of gradient descent on linear regression. The results and the line of thinking are very intuitive; I like the ideas and the empirical execution of the paper. Weaknesses: Although I am very familiar with the setup of the paper, I still had problems understanding it. The presentation can be made much clearer. Please work on your presentation and think about how to structure the paper in a clearer and more structured way. It is a bit confusing that you have 1.5 empirical sections and the paper is very dense. I think it would benefit the paper if the authors work on restructuring the presentation given the (hopefully available) extra page if accepted. It would be nice to clearly explain what is meant by "learning an MLP". 
This differs from classic student-teacher frameworks since you are not learning, and also don't have to learn (in the CoT-I setting, as far as I understand), the weights of the MLPs, but solely rely on gradient descent on a linear regression task which acts on the given features in-context. Given these features in-context, it feels quite obvious that the CoT setting indeed outperforms classic in-context learning. Nevertheless, I think it is still quite interesting. Also, it would be helpful to clearly state your abstraction/hypothesis that in LLMs the additional data needed to outperform plain ICL consists of "features" of the data, motivating your MLP abstraction. This only became clear to me after reading a few times. In the CoT-I/O setting, please explain why you are not using MLPs but deep linear nets as the teacher. Although the presented experiments make sense, the training of Transformers is usually done differently, as TFs are not trained to do CoT-I or CoT-I/O. Please comment on this. There are a couple of ablations / interesting experiments I would like to see. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My main concern is the following: Given that CoT is given the activations of the teacher, the sequence length of these experiments is larger. I think it is quite crucial to contrast the performance of your trained models when given different input data. If I understand correctly, you are hypothesizing that CoT works in LLMs because the extra data is representative of features of the input data. 3 naive ablations come to mind: just train a model on your setup where the extra data that is provided during CoT (s_0, ...) is either constant, random, or actually the same data given multiple times. That would provide more evidence that in your setting the teacher's activations are indeed crucial. It could (I doubt it myself) be that just more naive prompts do the trick. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I think the limitation that the setting is quite constructed and might not have anything to do with how CoT works in LLMs could be stated a bit more strongly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and encouraging feedback! We hope that we have addressed your questions and concerns adequately below. > **Lack of clarity:** We are sorry that the reviewer feels this way. Taking the reviewer's suggestion into sincere consideration, we are working on restructuring the paper to improve readability and hope that the new manuscript will be much better structured and easier to read. We have discussed our plan for reorganization in the general response. Basically, we will keep Section 3 pure theory, pack all the 2-layer MLP experiments (current Sections 3.1 and 4.2) together in Section 4, and Section 5 will further introduce our deep linear MLP results. We are glad to hear back if you have any further suggestions and comments. > **Explain "learning an MLP":** We are encouraged that the reviewer finds our work interesting! In our work, we define "learning an MLP" as in-context learning a function whose performance is close to the target MLP. This is indeed as the reviewer comments: transformers have the ability to implicitly learn the target MLP from the demonstrations without altering the model parameters. We will clarify this further in our updated manuscript. We also appreciate the reviewer's comment on the implicit gradient descent ability of ICL. Although the result that CoT > ICL is intuitive and not surprising, to the best of our knowledge, few works look into its mechanism. Our work introduces a novel framework to explain it as CoT = filtering + ICL. We also provide more discussion regarding this in the general response and visualize it in attached Figure 1. Specifically, CoT prompts are filtered into different steps of the ICL process, and the model performs gradient descent for each ICL step and links the outputs sequentially. Theoretical and empirical evidence has been provided in this paper to support this. 
To reiterate, our goal is to better understand an existing phenomenon rather than propose a new one. > **Clarify the connection to LLMs:** Thank you for the suggestion. We will definitely add more discussion regarding how our CoT setup relates to real NLP settings in the revised manuscript. > **Why use deep linear MLPs instead of standard MLPs?** Great question! We could have indeed used standard MLPs. However, a drawback of the ReLU activation is that, even with proper normalization, the feature distribution becomes more and more heavy-tailed as we get deeper in the network. This heavy tail impedes the training process of the transformer, resulting in very slow and brittle experimentation when the depth is large. We opted to use deep linear MLPs as they better preserve the input distribution (especially with random unitary matrices). A middle ground could have been ResNet :) Let us know if you have further questions. > **TFs are not trained to do CoT:** We would like to remark that the next-token prediction loss exactly mimics the CoT-I structure. What is unclear is whether the text in the pretraining data has a compositional structure. We believe that Nye et al. (check ref)'s work on "take it step by step" actually indicates that this is true, since the pretrained model responds to the "step by step" prompt. > **Ablation experiments (attached Fig 4):** In attached Figure 4, we run the ICL experiment based on the settings the reviewer suggests. Due to time limitations, we are still working on the scenario where the intermediate slot is filled with repeated data. Here, we train a small GPT-2 with $d=10$ and $k=8$. Blue, orange, and green curves show the evaluation results using the methods introduced in the paper. Here, we also try a variant setting where the prompt is provided in the form $(x,[?],y)$, and the solid and dashed black curves show the results where the padded intermediate feature [?] is random and constant $-1$, respectively. 
We choose $-1$ as the constant since the correct feature is output after the ReLU and is non-negative. The results show that without meaningful intermediate information, ICL finds it hard to identify the correct task functions. Finally, we apologize for missing the limitations discussion, and we agree with the reviewer that this work lacks an explanation of the connection between our MLP-based setting and LLM applications. We will further modify our paper to articulate our work more deeply and clearly. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thank you for your clarifications and for running the additional experiments. Given the commitment to restructuring the paper, I vote to accept the paper but will not increase my score further. As a few of the reviewers also had trouble understanding the paper, I feel that this was indeed a shortcoming of the paper that will hopefully be resolved if accepted. I encourage the other reviewers to rethink their, in my opinion, rather low scores, given that the authors will invest time to improve the paper's presentation. I find the results very valuable for the audience of NeurIPS and they should be discussed at the conference. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the encouraging assessment and positive recommendations. Their suggestions definitely helped improve the organization and clarity of our paper.
Summary: This paper aims to demystify the mechanisms underlying in-context learning (ICL) and chain-of-thought (CoT). It reveals how CoT significantly reduces the sample complexity of ICL. It uses a two-layer MLP and a backbone GPT-2 model for exploration. The experimental results reveal some interesting findings, e.g., that the number of in-context samples needed is linearly dependent on the input dimension. The paper also provides theoretical analysis of provable approximation of MLPs via chain-of-thought. Strengths: - The paper provides both experimental and theoretical evidence to support its claims about how CoT works. - The findings about the relation between the number of in-context samples needed and the input MLP dimension are interesting and insightful. Weaknesses: - Lack of clarity: Some parts of the paper may be unclear or difficult to follow, which could make it challenging for readers to understand the key findings and contributions of the study. Some of the concepts are not explained very well in the beginning (e.g., what is compositional learning? What are MLPs in-context?), thus making the paper abstruse. - The main contribution of the paper is not clear. The paper claims to dissect the mechanism of CoT Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How do you use the intermediate MLP features as the CoT prompts for GPT-2? It seems that the features are not tokens to be prompts. - Why do you use the input dimension d and hidden size k of an MLP layer as the measures for experiments? Are there any specific reasons to use them as proxies for some properties of the target task? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitation section is found. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and for recognizing the interest and insights in our work. We hope that we have addressed your questions and concerns adequately below. > **Lack of clarity:** We apologize to the reviewer for any confusion stemming from the unclear aspects of our work. Our commitment to enhancing clarity will be reflected in our planned revisions. While we're unable to make changes during this review stage, our general response outlines our reorganization strategy, and we're eager to consider any additional recommendations. We'll also refine the introduction to better articulate our motivations, main contributions, and notations. > * **_Compositional learning_:** This refers to a learning method that decomposes a complex problem into several intermediate stages, addressing each sequentially and combining the solutions to render a final prediction. CoT illustrates this method well. Unlike ICL, which requires recovering a function $f$ from data such as $(x_1,y_1,x_2,y_2,...)$ where $y=f(x)$, CoT utilizes prompts of the form $(x_1,s_1,y_1,x_2,s_2,y_2,...)$ where $s=g_1(x)$ and $y=g_2(s)$, and solves the functions $g_1,g_2$ separately. By composing them to obtain $f:=g_2\circ g_1$, CoT leverages a compositional advantage. This makes the process of solving subfunctions more feasible and sample efficient. > * **_MLPs in-context_:** We regret the ambiguity. Our statement that "_transformers can in-context learn MLPs_" means that transformers are capable of learning MLP tasks from in-context samples without altering the model weights, which diverges from traditional learning scenarios where model weights are tuned to optimize performance on specific tasks. > **Contribution is not clear:** We apologize that this does not come through. In this work, our primary aim is to dissect and elucidate the mechanism of CoT in a more accessible and simplified setting. 
In the context of LLMs, the complexity arising from the vast amount of pretraining data and intricate semantic information makes it challenging to discern why concepts such as "let's think step by step" are effective. Our work confronts this challenge by modeling the CoT prompt using random MLPs. Through both empirical and theoretical results, we demonstrate that the transformer's strength in CoT lies in its ability to learn compositional functions and separate out filtering and learning processes. This novel understanding of CoT's underlying mechanism results in significantly improved sample efficiency in both the training and in-context inference stages. We have included an attached Figure 1 to offer a visual representation of these findings, enhancing understanding, and have also provided a more comprehensive explanation in the general response. > **Tokens vs prompts:** We concur that GPT-2 contains an embedding layer that embeds language words into continuous vectors. For our MLP-based prompting study, we clarify that we've substituted the GPT-2 embedding layer with up and down linear projections, aligning with Garg et al. This allows tokens to map into the same dimension, and we'll add further implementation details in our experiment section. > **Why use $d$ and $k$ as a proxy for target task difficulty?** We thank the reviewer for this insightful question. To study the compositional learning ability and sample complexity of CoT, in this work, we explore the learning of complex functions $f=g_2\circ g_1$, which can be decomposed into two less intricate tasks. Using 2-layer MLPs as examples, the intermediate feature is generated via $s=g_1(x):=\phi(Wx)$ with $W\in\mathbb{R}^{k\times d}$ representing the weights of the first layer and $\phi$ as the activation. The second/final layer is given by $y=g_2(s):=v^\top s$ where $v\in\mathbb{R}^k$. CoT prompts are thus framed as $(x,s,y)$. 
Based on this setting, $(d,k)$ are the hyperparameters that control the complexity of the MLP, and increasing $d$ and $k$ correspondingly heightens the difficulty of recovering the MLP. To solve a 2-layer MLP with dimensions $d,k$, ICL indeed necessitates $O(d*k)$ in-context samples, whereas CoT only requires $O(\max(d,k))$ samples, thanks to its innate compositional learning capacity. We sincerely apologize for failing to include the limitations in the current version of our paper. We acknowledge that addressing the potential limitations is vital for a complete understanding of our work, and we have taken steps to include them in our general response. We are also committed to incorporating the reviewer's insightful comments to enhance our paper by adding further clarification and details to highlight and motivate our primary contributions. Thank you for bringing these vital aspects to our attention. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I don't have any further questions.
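The prompt constructions contrasted in the rebuttal above — ICL's $(x_1,y_1,x_2,y_2,\dots)$ pairs versus CoT's $(x_1,s_1,y_1,\dots)$ triples, with $s=\phi(Wx)$ and $y=v^\top s$ — can be sketched as follows. This is an illustrative reconstruction under the rebuttal's definitions, not the authors' code; the helper names are hypothetical.

```python
import random

def random_mlp(d, k):
    """Random 2-layer MLP with Gaussian weights: s = relu(W x), y = v . s."""
    W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(k)]
    v = [random.gauss(0, 1) for _ in range(k)]
    return W, v

def forward(W, v, x):
    # Intermediate feature s = g1(x) = relu(W x); final output y = g2(s) = v^T s.
    s = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]
    y = sum(vi * si for vi, si in zip(v, s))
    return s, y

def make_prompts(W, v, xs):
    """ICL prompt (x1, y1, x2, y2, ...) vs. CoT prompt (x1, s1, y1, x2, s2, y2, ...)."""
    icl, cot = [], []
    for x in xs:
        s, y = forward(W, v, x)
        icl += [x, y]      # ICL sees only (input, output) pairs
        cot += [x, s, y]   # CoT additionally exposes the intermediate feature
    return icl, cot
```

The CoT prompt exposes $s$, so each of $g_1$ and $g_2$ can be recovered from its own (input, output) slices — the source of the $O(\max(d,k))$ vs. $O(d*k)$ sample-complexity gap discussed above.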
Summary: In this paper, the authors explore the mechanics of Chain-of-Thought (CoT), a method that has successfully enabled language models to handle complex reasoning tasks by decomposing them into simpler steps. The study aims to understand the underlying mechanics of CoT by investigating its impact on the ability of transformers to in-context learn a simple yet general family of compositional functions: multi-layer perceptrons (MLPs). The authors reveal that the success of CoT can be attributed to two distinct phases: focusing on data related to each step of the composition and in-context learning the single-step composition function. They provide experimental and theoretical evidence that demonstrates how CoT significantly reduces the sample complexity of in-context learning (ICL) and facilitates the learning of complex functions that non-CoT methods struggle with. Strengths: 1. This work explores CoT in the learning of MLPs, which is a novel perspective. Such simplicity can also help to dig into the inner mechanism of CoT and ICL. 2. This work experimentally compares three schemes (ICL, CoT-I, and CoT-I/O), providing valuable insights into their differences and the benefits of CoT prompting. 3. The paper provides a formalized theorem that explains how a transformer architecture can realize the CoT process for MLPs. Weaknesses: 1. The Chain-of-Thought (CoT) implementation presented in this study appears to be limited in terms of reasoning steps, which may not be fully consistent with the original motivation behind proposing CoT as a method for complex reasoning tasks. In most existing work, CoT has the potential to assist in various complex reasoning tasks as it provides the model with step-by-step guidance on solving the input. Each reasoning step should include both the intermediate state and the process through which this state is generated from the previous one. This is typically implemented using natural language descriptions or formal language expressions. 
However, the CoT implementation in this paper focuses on the intermediate states in MLPs and does not provide information on how each intermediate state is produced. By only presenting intermediate states rather than complete reasoning steps, the CoT explored in this study primarily reflects the capability of state transition rather than reasoning. 2. The study appears to have limited analysis on generalization capabilities, which is a fundamental aspect of in-context learning (ICL). This study assumes that there is no distribution shift between the training and test datasets. However, a fundamental aspect of in-context learning (ICL) is the ability to learn new tasks from in-context examples. Consequently, it is crucial to examine the performance in a generalization setting. For example, [1], which shares a similar philosophy with this work, extensively discussed the behavior of ICL on out-of-distribution prompts. [1] Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformers learn in-context? a case study of simple function classes. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Do the conclusions drawn in this study remain consistent in an out-of-distribution setting? For example, what might occur if the MLPs in the test and training sets exhibit notable differences, such as having different widths? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: This paper does not explicitly address its limitations. The most significant limitation of the study is the unclear contribution towards answering the question of how the Chain-of-Thought (CoT) method can assist in solving complex reasoning tasks. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and for recognizing the novelty and value our results contribute, particularly in understanding the inner mechanisms of CoT and ICL. >**Reasoning steps vs intermediate state:** We appreciate the reviewer's insightful comment on this matter. In the realm of NLP, the reasoning process targeting the intermediate state is intricately interwoven within the language sentences and pretrained LLMs. Our work, however, highlights that the advantage of CoT over ICL lies in the decomposition of complex problems into intermediate steps, rather than merely its competence in semantic comprehension. Though our MLP setting may seem to lack a distinct "reasoning process", it does implicitly include reasoning within positional embedding. Inspired by prior work [Garg et al. 2022, Wei et al. 2022], we employ a CoT-based MLP setting, enabling us to precisely and distinctly examine the underlying CoT mechanism. > >We respectfully disagree with the claim that "_the CoT explored in this study primarily reflects the capability of state transition rather than reasoning_". The process of ICL/CoT involves acquiring the ability to perform a novel task through demonstrations. When provided with too few in-context examples, accurate predictions become unattainable, as evidenced by the figures in our paper where fewer samples (smaller x-axis) correspond to larger errors. Thus, CoT method involves not merely state transition but an in-context learning phase to discern and restore the feature generation function. > **Out-of-distribution evaluations (Fig 2 in the attached file):** Great point! Our investigation has focused on the standard setting where train and test samples have identical distribution. Regrettably, we did not explore the out-of-distribution aspect of CoT in our original manuscript. Following the reviewer's suggestion, we have commenced new experiments, with preliminary findings shown in the attached Figure 2. 
In short, **our observations on CoT translate well to several out-of-distribution settings such as incorporating label or feature noise and misspecified input or hidden dimensions**. To elucidate our results and examine the test risk under varying distribution shift levels, we plot the test risk evaluated when the prompt has 100 in-context examples. In Fig 2(a), we analyze noisy in-context samples during testing. The solid and dashed curves represent the test risks corresponding to noisy in-context samples whose (input, output) takes the form of either $(x,y+\text{noise})$ or $(x+\text{noise},y)$, respectively. The results indicate that CoT exhibits greater robustness compared to ICL, and the test risks increase linearly with the noise level, which we attribute to the randomized MLP setting. Additionally, in Fig 2(b)(c), we instead explore out-of-distribution test tasks where test MLPs differ in $(d,k)$ from the training phase. For both subfigures, we first train a small GPT-2 using 2-layer MLPs with $d=10,k=8$. In Fig 2(b), we fix $d=10$ and vary $k$ from $1$ to $8$, whereas in Fig 2(c), we fix $k=8$ and vary $d$ from $1$ to $10$. In both instances, the findings reveal that CoT's performance remains almost consistent when $k\geq 4$ or $d\geq6$, and ICL is unable to surpass it. The improved performance of ICL with smaller values of $d$ or $k$ again reinforces our central assertion that ICL requires $O(d*k)$ samples for in-context learning of the 2-layer random MLP, and reducing either $k$ or $d$ helps in improving the performance. Given that we employ the ReLU activation function, smaller values of $d$ or $k$ can lead to significant bias in the intermediate feature. Consequently, CoT cannot derive substantial benefits from this scenario, resulting in a decline in performance. To sum up, we thank the reviewer for raising this issue and will integrate OOD experiments in our revision. 
> **Robustness implications of our work on broader in-context learning** We also briefly discuss how our formalism that decouples CoT into filtering and in-context learning stages has adversarial/distributional robustness implications for in-context learning. The related works on in-context learning such as [Garg et al. NeurIPS'21, Akyurek et al. ICLR'23] focus on the scenario where the input prompt contains IID (input, label) pairs. A natural question is: **What if the prompt contains non-IID data, for instance, is in-context learning robust to outlier features?** In this work, we show that the transformer can provably *filter a heterogeneous prompt* to obtain a *purified prompt containing IID features* amenable to in-context learning (our Theorem 1). Thus, beyond CoT, our work has implications for outlier/adversarially-robust in-context learning. Our Filtering+ICL formalism has the following interpretation: *Train the transformer with heterogeneous prompts so that it learns how to filter. Then, it can implement outlier-robust ICL during inference time*. We will add a discussion on this in the last section. > **Lack of limitations:** We acknowledge that we should have provided an explicit discussion of limitations. We've now addressed potential limitations in our general response and intend to include a designated subsection in our revised version. We extend our gratitude to the reviewer once more for their insightful comments. In response to their feedback, we are actively engaged in enhancing our discussion to elucidate our problem more comprehensively. Additionally, we are delving deeper into the out-of-distribution aspect by conducting further experiments with varying parameters, such as different $(d,k)$ MLPs pretrained models, alternative GPT-2 architectures, diverse types of distribution, and more. --- Rebuttal Comment 1.1: Title: Thank You! 
Comment: I would like to raise the score from 4 to 5, as the additional experiments addressed my concerns regarding generalization. Nevertheless, I still do not comprehend how this study contributes to elucidating the mechanism of CoT in practical reasoning tasks. Consequently, I can only assign a borderline rating for this work. --- Reply to Comment 1.1.1: Title: Thanks and further clarification on CoT Comment: Dear Reviewer, Thank you for your experiment suggestions and for reevaluating our work. We acknowledge your concern. Below, we discuss the core features of CoT, based on which we will explicitly distinguish **few-shot CoT** and **zero-shot CoT** in the final manuscript. We hope that this can address some of your concerns. 1. **Benefits of step-by-step problem solving:** The core of CoT is decomposing complex tasks, and our work distills this essence into the MLP setting. Although we agree that practical CoT is not as structured as our MLP prompt, we establish **clear theoretical and empirical benefits of "step-by-step problem solving"** in terms of sample efficiency as well as model expressivity. 2. **The strategy CoT uses to solve the problem:** In practice, CoT (step-by-step decomposition) can happen in two ways: - **Option 1: Few-shot CoT.** In this setting, the transformer leverages the examples and associated solutions provided in the context window to solve the new problem. For instance, it solves a new math problem by studying related problems and their solutions in-context. Most of our results, namely two-layer MLPs, and theory focus on this **few-shot** setting. Here, the transformer infers "state transitions" (i.e. weight matrices of the MLP) from in-context examples. - **Option 2: Zero-shot CoT.** In this setting, the transformer creates the solution steps without any relevant examples. An intuitive explanation is that the model maps the input problem to a **memorized set of "skills" and skill transitions** and applies them step-by-step. 
For instance, standard operations like "+,-,x,/" can be memorized during pretraining. Zero-shot is possible because these memorized "skills" and "transitions" do not have to be inferred in-context (unlike Option 1). Our **Section 4.3** provides insights into this via Deep Linear MLPs. Here, unlike 2-layer MLPs with continuous weights, we use finitely many weight matrices; thus, transitions are discrete. Each matrix corresponds to a "skill" that can be memorized by the transformer during training. Confirming this intuition, **Figure 6(a)** demonstrates that chain-of-thought can succeed with a single example whereas ICL needs more. This is because, with CoT, the transformer can memorize the $K=4$ matrices and compose them, whereas ICL can't memorize all $K^L=4^6$ variations. Here, the single example reminds the transformer which skills to select from the memorized repertoire. This would in fact work zero-shot if the skills can be determined from the input features, specifically, when input features serve as an informative "initial state" to kickstart the skill chain. However, as the reviewer has also mentioned, we have not delved into the informativeness of input features, and instead focused on learning state transitions. In the final manuscript, we will clearly distinguish few-shot CoT vs zero-shot CoT and provide a discussion of limitations, e.g., zero-shot inference of the chain from the input features. Thanks again for your valuable insights and we would be grateful to hear further feedback, Authors
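The counting argument in the reply above — $K=4$ skill matrices composed over $L=6$ steps yield $K^L=4^6$ distinct end-to-end functions, while CoT only needs the $K$ single-step skills — can be made concrete with a short enumeration. This is an illustrative sketch of the arithmetic, not the authors' experiment.

```python
import itertools

K, L = 4, 6  # K candidate "skill" matrices, chains (compositions) of length L
skills = range(K)

# Every end-to-end function ICL would have to represent is one full chain:
chains = list(itertools.product(skills, repeat=L))
assert len(chains) == K ** L == 4096

# CoT, by contrast, only needs the K single-step skills plus the ability to
# chain them, since each intermediate state reveals the skill applied there.
memorized = set(skills)
assert len(memorized) == K == 4
```

The gap between memorizing 4 skills and distinguishing 4096 compositions is what lets CoT succeed from a single in-context example in Figure 6(a).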
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments and insightful questions. We are gratified that many of the reviewers found our work insightful and interesting. In the following sections, we will summarize our key contributions, respond to shared concerns raised by the reviewers, and provide explanations for the new figures presented. We briefly recap our **main contribution and message**: There are several major works on in-context learning (ICL) theory (e.g. [Xie et al. ICLR'22, Garg et al. NeurIPS'22]). These focus on learning from the examples provided in the prompt. Compared to these, we consider a novel setting where the prompt contains a chain of features obtained from a multi-step problem-solving (or reasoning) process. We show that when solving a new problem, **chain-of-thought operates by (1) filtering the heterogeneous prompt and retrieving the relevant features, (2) running in-context learning on the filtered prompt, and (3) looping back to (1).** This process is illustrated in attached Figure 1. Our key contributions are establishing how transformers can provably implement this process and demonstrating the empirical and theoretical value of CoT in terms of approximation and sample complexity. In our Theorem 1, the attention mechanism plays a critical role in both the filtering and ICL stages, highlighting the transformer-specific nature of our results. We will further elaborate on this in the final manuscript. To proceed, we address the shared concerns of the reviewers: - **Paper Organization.** Reviewers 8qtM, zLZS, and wZge noted concerns with the paper's clarity, citing confusion over the existence of both empirical and experimental results sections. We appreciate their feedback and have laid out a plan to enhance the paper's readability: 1. Section 3 will be focused solely on theory, and additional discussion will be included to better motivate and clarify our theoretical results. 
For instance, we will include an illustration to elucidate how the theory functions (see Fig. 1 in the .pdf and also the response to Reviewer wZge for further details). 2. All experiments involving 2-layer MLPs will be consolidated in Section 4. 3. Section 5 will address the training aspects of CoT by exploring deep linear MLPs. - **Lack of Limitations Section.** We recognize our oversight in neglecting to include an explicit discussion of the paper's limitations and sincerely apologize for this omission. Rest assured, we will incorporate a limitations section in our revised manuscript. Our notable limitations include: 1. We should have provided a deeper study of out-of-distribution scenarios. In response to Reviewer mo8c, we have partly addressed this limitation (see Fig 2 in the pdf). 2. Our key contributions are theoretical and we work with a synthetic problem setting which extends [Garg et al. NeurIPS'22, Akyurek et al. ICLR'23] to compositional functions (namely MLPs) and CoT. **Supporting experiments:** We provide the following figures to address reviewer inquiries. - Figure 1 aims to clarify the chain-of-thought prompting formalized by our work by visualizing how CoT can be decoupled into filtering and in-context-learning stages. - Figure 2 provides new robustness experiments in response to Reviewer mo8c. In Fig 2(a), we add noise to the label $y$ (solid) or input features $x$ (dashed). This shows that CoT methods (trained on noiseless data) are fairly robust to noisy data at test-time. In Fig 2(b), we in-context learn an MLP with $k$ hidden nodes whereas the transformer is trained for MLPs with $8$ hidden nodes. This shows that CoT-I/O is robust to misspecification as long as it is small. However, if $k$ is very small, then CoT-I/O suffers more from distribution shift. This makes sense because CoT-I/O relies more on hidden features compared to others. 
In Fig 2(c), we consider misspecification of the input dimension: the TF is trained with $d=10$ but we feed a neural net with $d<10$. This reveals that CoT is quite robust and, in fact, CoT-I is the most robust to misspecification. We speculate this is because CoT-I makes similar use of input and hidden features. - In Figure 3, we investigate deep nonlinear MLPs. The main takeaway is that more CoT steps help, but not that much. We believe this might be because of the random generation of neural nets. Specifically, it is possible that when neural net weights are fully random, a two-layer neural net might be able to accurately approximate a 4-layer network. This would mean only 2 CoT steps are needed to do a good job. - In Figure 4, we put vanilla ICL in CoT format and feed $(x,[?],y)$ in response to Reviewer zLZS. We set the [?] mark as random Gaussian features or an all-[-1] vector. In both cases, the performance coincides with vanilla ICL $(x,y)$, which we found to be intuitive. - In Figure 5, we train the models with more in-context examples as requested by Reviewer wZge. The main conclusion is that, for the small GPT, ICL indeed cannot approximate the neural net even with many examples (unlike CoT), whereas for the large GPT, ICL can do so (although much less efficiently). This is in line with our theoretical intuitions on the expressivity benefits of CoT. Finally, as for the concerns/questions raised, we believe that we have addressed all of them sufficiently and reply in line to each review. We would be grateful to respond to further reviewer inquiries during the discussion week. Pdf: /pdf/db87b6b0cd1a8c17bd953f6d6cb3cb7f2ee3779a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
Accept (poster)
Summary: This research paper focuses on LVM-Med, a self-supervised learning (SSL) technique designed for medical imaging tasks. Based on a second-order graph matching strategy, LVM-Med is trained on a large-scale medical imaging dataset. The researchers found that the method significantly improves performance on a variety of downstream medical imaging tasks compared to other supervised learning methods and foundation models trained on large quantities of image-text data. These findings were consistent across two different architectures: ResNet-50 and Vision Transformer (ViT). Strengths: - The LVM-Med method was evaluated on various tasks, including segmentation, object detection, and image classification. The results were compared to foundation models like CLIP, ALIGN, FLAVA, and SAM. It performed particularly well on eight medical segmentation tasks, outperforming both 2D SSL methods trained on the same dataset and foundation models. - The training of LVM-Med on a large-scale medical imaging dataset indicates its capacity to handle and learn from large amounts of data, which is often a requirement in the field of medical imaging. - The LVM-Med model outperforms both supervised learning methods and foundation models trained on hundreds of millions of image-text instances. This includes a variety of popular models such as CLIP, ALIGN, FLAVA, and SAM. - The results show that LVM-Med performs well in both in-distribution and out-of-distribution settings, implying a certain level of robustness and generalizability. - LVM-Med provides benefits in both end-to-end and prompt-based segmentation tasks. This flexibility can be particularly useful in real-world applications, where various segmentation scenarios may be encountered. - The authors also conducted an ablation study, experimenting with variations in LVM-Med's configuration to assess the importance of various components in the overall performance. 
It was concluded that all factors contribute to the final performance, with the second-order graph matching and Gumbel noise being the most significant. Weaknesses: - Only a 2D backbone is considered: The LVM-Med model focuses primarily on a 2D backbone. The authors should provide more discussion of the challenges of applying this framework to a 3D backbone. - The LVM-Med model is trained on a large-scale dataset; how can the authors ensure the testing dataset does not leak into the training set? Moreover, the use of a single dataset could potentially limit its generalizability. Ensuring robust performance across diverse datasets from different sources is critical. - The model's performance benefits seem to be tied to its training on a large-scale medical imaging dataset. If such a dataset is not available, the performance of the model might be significantly diminished. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the comments in the weakness part. In addition, though the ViT architecture has more total parameters, in some cases it is less effective than LVM-Med ResNet-50. More research and discussion would be needed to fully understand this performance discrepancy. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concerns regarding limitations and broader societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive feedback! **Question 1: Add more discussion about the challenges of applying this framework to a 3D backbone.** Thank you for the suggestions. Due to the large workload in dataset collection and experiments, we restricted this work to solving 2D backbone-related downstream tasks. For a 3D backbone for video or 3D volume classification, a dedicated study is required to investigate further how to extend our proposed algorithm and design optimal architectures. We will add more discussion toward these directions in the Limitations and Future Work Section, which includes the following points: - Extend the architecture in LVM-Med to dynamically receive temporal inputs like frames in videos or consecutive slices of 3D volumes rather than treating them independently (1). - (1) leads to the question of how to modify LVM-Med's graph matching to encompass combinatorial constraints among internal sections inside 3D volumes and between slices across different inputs. One possible solution is to leverage deformable attention mechanisms, which only focus on a flexible small set of major slices conditioned on the input data [1], reducing computational complexity and permitting the handling of multi-scale feature maps. [1] Zhuofan Xia et al., “Vision Transformer with Deformable Attention,” CVPR 2022. &nbsp; &nbsp; &nbsp; **Question 2: How can we ensure the testing dataset does not leak into the training set?** We thank the reviewer for the great questions. In particular, to use LVM-Med and avoid potential testing-data leakage during the training steps, there are some typical cases: - If the dataset used in downstream tasks does not belong to our 55 collected ones, users can freely download and apply our models. - If the dataset belongs to the collection, the user needs to check whether it has a default training/testing split. 
If such a split is available, we follow it and only use the training samples; otherwise, the user needs to exclude the indices of images (we sampled 20% of total images for training) trained by LVM-Med. We will release exact information for the latter cases in our code repository. &nbsp; &nbsp; &nbsp; **Question 3: The use of a single dataset could potentially limit its generalization. Ensuring robust performance across diverse datasets from different sources is critical** In our large-scale medical dataset, the collected images cover diverse body organs and data modalities, as demonstrated in Figure 1 (main paper) and Table 7 (Appendix). It is not from a single large dataset. While some datasets have a large number of samples, we address this by applying balancing strategies in each mini-batch during the training procedure to avoid a potential bias toward a specific domain. In downstream experiments, we selected various types of data modalities, encompassing MRI, CT scans, X-rays, ultrasound images, and color images. Our settings also span eleven distinct organ structures (including tumors, heart, skin, brain, etc.). The majority of these experiments have revealed favorable outcomes with LVM-Med compared to alternative reference models. These consistent records, therefore, highlight the strength and adaptability of LVM-Med across different areas of expertise. &nbsp; &nbsp; &nbsp; **Question 4: The model's performance benefits seem to be tied to its training on a large-scale medical imaging dataset. If such a dataset is not available, the performance of the model might be significantly diminished** We agree with the reviewer that the achieved performance is strongly aligned with the scale of the dataset we collected. This aspect indeed stands out as our primary discovery and is one of the most critical LVM-Med contributions, alongside the novel self-supervised learning approach based on graph matching. 
We believe that our research will catalyze future investigations into the creation of expansive medical datasets and a deeper exploration of practical applications in real-world medical scenarios, thereby pushing the boundaries of utilizing machine learning within medicine. &nbsp; &nbsp; &nbsp; **Question 5: The ViT architecture, in some cases, is less effective than LVM-Med ResNet-50. More research and discussion would be needed to understand this performance discrepancy fully.** In the **Limitations and Future Work Section**, we discussed this concern, i.e., ViT architecture performance with end-to-end learning. To gain a more comprehensive understanding of this phenomenon and effectively tackle it, we believe additional experimentation is needed. For instance, optimizing hyperparameters like projector heads and token feature dimensions through grid searches during pre-training is important. Another potential solution is integrating the trained architecture and fine-tuning it efficiently for downstream tasks. While our study explores the latter approach, we intend to delve into this aspect and update our findings in publicly available code repositories. For the pre-training hypothesis, we suggest it as future work for investigation, given the huge amounts of required computations. --- Rebuttal Comment 1.1: Comment: The rebuttal solves my question well. In the context of training large-scale 3D medical data, there are some prior works. To enhance reader comprehension and provide a comprehensive outlook, it would be nice to add these to the Limitations and Future Work Section. [1] Liu, Jie, et al. "CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection." arXiv preprint arXiv:2301.00785 (2023). [2] Ulrich, Constantin, et al. "MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation." arXiv preprint arXiv:2303.14444 (2023). [3] Wasserthal, et al. "TotalSegmentator: robust segmentation of 104 anatomical structures in CT images." 
arXiv preprint arXiv:2208.05868. (2023). --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, Thank you very much for reading our response and giving additional feedback. We will add the references suggested by the reviewer in the revision.
Summary: This paper proposes a self-supervised pre-training strategy for medical imaging using a graph matching approach. Each unlabeled image is transformed via a pair of data augmentations and then processed via an encoder network. The augmented pair of images become vertices in a pair of graphs, with vertex features being the encoder outputs and edge connections selected via k-nn. A graph convolutional network is trained for vertex-to-vertex matching of the extracted pair of graphs. The training objective incorporates global and local similarity learning over spatial features along with a second-order edge similarity cost. Due to the combinatorial nature of the objective, gradients for backpropagation are approximated via Implicit MLE. Pre-training is performed over a large scale dataset comprising 55 publicly available datasets and used for multiple downstream fine-tuning tasks including segmentation, detection and classification. Strengths: The proposed graph matching technique for self-supervised learning is a novel and significant contribution. Abundant experiments demonstrate the generalizability over downstream tasks, with error bars also included. Weaknesses: Some related works on graph matching in computer vision are missing in Section 2.3. Doi et al. Detecting Object-Level Scene Changes in Images with Viewpoint Differences Using Graph Matching 2022 Bian et al. Unsupervised Domain Adaptation for Point Cloud Semantic Segmentation via Graph Matching 2022 Wu et al. Unsupervised Visible-Infrared Person Re-Identification via Progressive Graph Matching and Alternate Learning 2023 Liu et al. Self-supervised Learning of Visual Graph Matching 2022 Peng et al. GATE: Graph CCA for Temporal SElf-supervised Learning for Label-efficient fMRI Analysis 2022 The authors should discuss and contrast with the graph matching objectives and applications in these works to highlight their novelty in self-supervised learning. 
Introduction can also be improved to better highlight the contributions. Rather than starting with discussing image-text datasets, the story should highlight the value of self-supervised learning in medical imaging and related works > proposed self-supervised learning method via graph matching > large-scale dataset collected for implementing this method > experiments on downstream tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors clarify how they use pre-trained ResNet-50 for downstream segmentation via U-Net? Does this mean that the encoder of U-Net has the same architecture as ResNet-50? This is not clear as the vanilla U-Net architecture (Cicek et al. 2015) is different. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
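The graph-construction step the summary describes — vertex features from encoder outputs, edges selected via k-NN — has a standard form that can be sketched as follows. This is a hedged illustration, not the authors' implementation; `knn_adjacency` and its arguments are our own naming.

```python
import numpy as np

def knn_adjacency(features, k=3):
    """Build a k-NN adjacency matrix from row-vector features.

    features: (n, d) array of per-vertex encoder outputs.
    Returns a binary (n, n) adjacency with k outgoing edges per vertex.
    """
    # Pairwise squared Euclidean distances via the expansion ||x-y||^2 = ||x||^2 + ||y||^2 - 2<x, y>.
    sq = (features ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * features @ features.T
    np.fill_diagonal(d2, np.inf)  # exclude self-loops
    adj = np.zeros_like(d2)
    # Connect each vertex to its k nearest neighbours.
    idx = np.argsort(d2, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(features)), k)
    adj[rows, idx.ravel()] = 1.0
    return adj
```

Two such graphs (one per augmented view) would then be fed to the GCN matching network described in the summary.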
Rebuttal 1: Rebuttal: Thank you very much for your strongly positive feedback! **Question 1: Missing related works on graph matching in computer vision (Section 2) & further improving the introduction to highlight contributions.** We sincerely acknowledge your constructive feedback. Your suggestions are valuable to us, and we will integrate these points to enhance both the introduction and the section on related works. These improvements will certainly make the paper more appealing and better highlight our novelty. &nbsp; &nbsp; &nbsp; **Question 2: Clarifying how to use pre-trained ResNet-50 for downstream segmentation via U-Net.** It means that we replace the encoder network of the U-Net architecture (Cicek et al. 2015) with the ResNet-50 architecture (feature outputs after five blocks of ResNet layers). The decoder of U-Net is then constructed so that the output at each up-sampling layer has the same feature maps as the original U-Net decoder. In practice, using ResNet-50 or other architectures such as VGG or EfficientNet as the U-Net encoder is a common choice and has been applied in several applications such as skin lesion analysis [1] and lung segmentation [2]. Therefore, we want to show the benefits of LVM-Med for those study cases. [1] Nguyen, Duy MH, et al. "TATL: Task agnostic transfer learning for skin attributes detection." Medical Image Analysis, 2022 \ [2] Cheng, Dorothy, et al. "Transfer Learning U-Net Deep Learning for Lung Ultrasound Segmentation," arXiv 2021 --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and keep my original score. Thank you, --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, Thank you very much for reading our response and keeping the original positive score.
Summary: This paper collects a large medical imaging dataset, and it also shows that a self-supervised learning technique based on second-order graph-matching enhances performance in various downstream medical imaging tasks compared to other supervised learning methods and foundation models trained on image-text instances. The evaluation also considers two different architectures: ResNet-50 and ViT backbones. Strengths: - Creating such a large medical imaging dataset is a commendable feat, and is needed by the community. - The proposed self-supervised task is interesting. - The evaluation results are comprehensive and thorough. Weaknesses: None to report. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: These are included in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your strongly positive feedback! This encourages us to continue to improve and extend our research. --- Rebuttal Comment 1.1: Comment: I have read the comments, and I keep my original score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, Thank you very much for reading our comments and keeping the original strongly positive score! Best, Authors
Summary: The paper proposes a set of networks called LVM-Med which are trained on large-scale medical datasets. The authors collected more than a million medical images from more than 50 publicly available datasets of diverse modalities and structures of interest (e.g. CT, MRI, Ultrasound...). In the work, several self-supervised algorithms are benchmarked on the large dataset. Furthermore, this work proposes a self-supervised contrastive learning algorithm based on a second-order graph-matching formulation. Strengths: - the paper combines an incredibly large number of medical image modalities and images - I like the formulation of contrastive learning as a graph matching objective - the method section is comprehensive and the contributions are formalized Weaknesses: 1) Some aspects of the experimentation are unclear to me. From how I understand the text, the authors aim to compare to a large number of other datasets, baselines ("In 2D settings, we also compare with 2D supervised architectures, such as U-Net, U-Net++, Attention U-Net, etc."), and tasks across 2D and 3D. What I do not understand is how the authors choose their baselines and what they present in the tables. For example, in Table 2. for the Drive segmentation dataset, the authors report 2D supervised Methods (e.g., UNet) with Dice scores ranging from 59 to 65. Clearly, this is not a performance on par with scores reported in other works. From the literature, the state of the art in supervised DRIVE segmentation should be way higher. A fully supervised segmentation baseline on the DRIVE dataset should have 80+ Dice (https://paperswithcode.com/sota/retinal-vessel-segmentation-on-drive). Similarly, the IoU performance of BRATS baselines should be higher https://arxiv.org/pdf/1811.02629.pdf Potentially I misunderstand what kind of comparisons the authors provide here. Can the authors please explain the choice of their baselines and their experimental settings? 
2) The clarity of the writing in some sections should be improved, e.g., in the experimentation. 3) The reproducibility of the results and methods is a concern. I have not seen the code. Furthermore, the sheer amount of computing required makes reproducibility challenging. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) The work has been run on a large set of data. Can the authors provide more detail on the overall computing required to replicate their experiments? 2) The work uses the reparameterization trick to create a complex discrete distribution. Can the authors explain the effect of the backpropagation and what this implies for the learning signal in more detail? 3) Why are there no baselines for alternative ways to define a self-supervised contrastive loss in the presented setting? I would like to stress that I have an overall positive impression of the work and would be willing to reconsider my rating based on the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive feedback! **Question 1: Unclear experiment settings** **1.a: How to choose baselines?** We employ four primary baseline types (e.g., in Table 2): 1. **2D Supervised Method**: Comparison with standard medical architectures initialized from ImageNet, while our model uses LVM-Med weights. 2. **2D-SSL on Medical Data**: Benchmarking our self-supervised algorithm against state-of-the-art SSL methods trained on the same collected dataset. 3. **Foundation Model**: Evaluating the performance of large vision/vision-language models (e.g., CLIP, Flava, SAM) compared to LVM-Med's domain-specific approach. 4. **Prompt-based Segmentation**: User-interactive prompts for segmentation masks, highlighting LVM-Med's practicality besides end-to-end training as in (1), (2), and (3). **1.b: Drive and BRATS segmentation performance lower than literature** Thank you for pointing out these interesting points. After examining the papers mentioned by the Reviewer, we found that the settings in our work are different, e.g., we use less training data (Drive) or just a single input rather than multiple inputs (BraTS). - **Drive Segmentation**: Our U-Net baseline with ResNet-50 (LVM-Med) achieved a 65 Dice score, differing from the literature (81 Dice) due to distinct settings. In particular, the authors in [2] use overlapping image patches with a stride of 32 and an image size of 128 × 128 for training. Therefore **they ended up with 4200 images for training (we used 20 images as in the original training set)**. During testing, they applied overlapping image patches again with a stride of 3 and averaged predictions over 20 sub-patches for each image in the test set. To provide further insights, we conducted additional experiments on the Drive dataset using a U-Net loaded with LVM-Med ResNet-50 weights. Our performance given the same settings as [2] is **an 84.2 Dice score on average, which is better than the baseline in [2]**.
- **BRATS-2018 Segmentation**: Our 3D-IoU score is 73 for the whole tumor, compared to an 88-90 Dice score for the best method [3]. There are two main differences in settings: (i) First, **we only use the Flair modality for each patient, while [3] combined the four available 3D MRI modalities of each patient into a 4-channel image as input**. (ii) Second, we measure performance on a test set of 95 samples randomly selected from a training set of 285 patients, while [3] reported performance on the test set of the BraTS-2018 competition, which is unavailable now because the competition has been closed. In conclusion, we utilize simple configurations for all datasets, skipping extra pre-processing for data augmentation (Drive) or input fusion (BraTS). We believe these default settings better showcase the benefits of using pre-trained models, especially with limited labeled data. Nevertheless, further experiment details will be included in the Appendix to avoid confusing readers. We appreciate the Reviewer's insightful feedback. [2] RV-GAN: Segmenting Retinal Vascular Structure in Fundus Photographs using a Novel Multi-scale Generative Adversarial Network, MICCAI-2021 [3] 3D MRI brain tumor segmentation using autoencoder regularization, 4th International Workshop, BrainLes 2018, MICCAI. **Question 2: Reproducibility and code availability** We are committed to open-sourcing our code and pre-trained models. **We have sent the code to the Area Chair. We refer the reviewer to the Area Chair for obtaining the code**. In the code, we present detailed instructions in README.md and release ResNet-50 and ViT models trained by LVM-Med, along with the configurations of segmentation, classification, and object detection tasks. **Question 3: Computing details** As mentioned in **Section 4.1 Implementation details**, we trained ResNet-50 and ViT-B/16 on our dataset using high-powered GPU systems with 16 A100 GPUs, each with 80GB memory.
The ResNet-50 took five days (batch size: 3200 images), and ViT-B took seven days (batch size: 2800 images) for 100 epochs. While the training procedures require a high-powered GPU system, our open-source weights will significantly reduce the time and financial investment necessary for other studies seeking to apply our results in medical applications. **Question 4: Reparameterization trick and backpropagation** We utilize IMLE for reparameterizing discrete distributions, as outlined in Algorithm 1. To assess the efficacy of this backpropagation approach, we conducted a comparative analysis. Specifically, we compared our method (referred to as LVM-Med (Full) in Table 6) to an alternative technique employing constant value-based perturbation [50] (referred to as LVM-Med w/o Gumbel noise, Table 6). Through comprehensive evaluation across classification and segmentation tasks, we consistently achieved superior performance over the alternative approach [50]. For example, segmentation results with IMLE are 83.05, declining to 81.37 using [50]. [50] Optimizing rank-based metrics with blackbox differentiation, CVPR 2020. **Question 5: Baselines for contrastive loss in the presented setting** We thoroughly compare our approach with alternative contrastive losses (Barlow Twins, DINO, SimCLR, MoCo-v2, VICRegL) in Tables 2, 3, 4, and more (2D-SSL on medical data), which were trained using the same dataset as LVM-Med. Moreover, we demonstrated in Figure 1 (on the right) how LVM-Med can serve as a unified and extensible framework for other contrastive SSL algorithms. We hope our response resolved most of your concerns and helped you evaluate our work more positively. If you have other comments, we are happy to address them in the reviewer-author discussion period. --- Rebuttal Comment 1.1: Comment: Dear Reviewers, thank you for your rebuttal. The clarifications helped me a lot. The overall experimentation is impressive.
I am still a bit puzzled by the initial choice of baselines. Given the computational requirements of your work ("16 A100-GPUs, each with 80GB memory"), simplifying your experimentation on the relatively fast supervised methods appears odd. If possible, please also provide experiments on the BRATS dataset. Maybe even consider participating in the challenge itself if your method remains superior? Overall I see merit in this work and uphold my rating leaning to accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for reading our response, providing additional feedback, and upholding the positive rating. Since LVM-Med proposes novel pre-trained models, it is essential to assess their performance on downstream tasks in plain settings, i.e., avoiding extra pre-processing or increasing the training data via augmentation. Otherwise, it is difficult to justify whether improved performance comes from the pre-trained models or from the increased number of training instances. In most conducted experiments, we tried to examine this factor either with segmentation (Tables 2, 3) or classification (Table 5, linear evaluation and fine-tuning settings). We also validated LVM-Med performance in complex configurations, encompassing diverse factors such as architectures, incorporating supplementary training data or features, etc. For instance, Figure 3 in the main paper shows our performance on diabetic retinopathy grading tasks, where we use the DRG-Net architecture and load our pre-trained model (ResNet-50) into it. The results demonstrate our strategy leads to state-of-the-art results compared to the latest methods on this benchmark. Finally, **we employ the computational resources involving 16 A100 GPUs specifically during the pre-training phase. It is essential to note that these resources are not utilized in the downstream tasks**.
For the downstream tasks, we resort to modest GPUs, e.g., a single RTX 3090 with 24GB memory, to load our model and fine-tune it. Such computation costs are equal to those of other baselines like U-Net or U-Net++ (2D Supervised Method baselines), which is why we compare against those approaches in the experiments. For the results on the BraTS dataset using similar settings to the challenge, we are implementing this and will get back to you when the results are ready. In the meantime, if the reviewer has other questions, we are happy to discuss them.
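The IMLE (Implicit MLE) backpropagation discussed in the rebuttal above follows a perturb-and-MAP idea: perturb the discrete solver's scores with Gumbel noise, solve the combinatorial problem twice, and use the difference of solutions as a gradient estimate. The sketch below is a schematic of that idea only, not the paper's implementation; `map_solver` is a toy argmax stand-in for the actual graph-matching solver, and the names and the `lam` hyperparameter are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_solver(theta):
    """Toy MAP oracle: one-hot argmax over scores (stand-in for a matching solver)."""
    z = np.zeros_like(theta)
    z[np.argmax(theta)] = 1.0
    return z

def imle_gradient(theta, dloss_dz, lam=1.0, noise_scale=1.0):
    """IMLE-style gradient estimate of d(loss)/d(theta).

    Perturb the scores with Gumbel noise, solve MAP twice (once with the
    downstream gradient folded into the scores), and take the difference.
    """
    eps = rng.gumbel(scale=noise_scale, size=theta.shape)
    z_plus = map_solver(theta + eps)
    z_minus = map_solver(theta + eps - lam * dloss_dz)
    return (z_plus - z_minus) / lam

theta = np.array([1.0, 0.2, -0.5])      # solver scores (illustrative)
dloss_dz = np.array([0.5, -0.5, 0.0])   # downstream gradient w.r.t. the discrete output
g = imle_gradient(theta, dloss_dz)
```

The "w/o Gumbel noise" ablation mentioned in the rebuttal would correspond, roughly, to replacing the Gumbel perturbation with a constant one.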
Rebuttal 1: Rebuttal: We would like to thank all reviewers for the positive and constructive feedback, which we will leverage to improve this work. We are very encouraged that most reviewers agree that our efforts in **creating such a large medical imaging dataset is important and needed by the community**. Reviewers jRh7, xKHb, and XBHj also appreciate the **LVM-Med algorithms and believe they are novel and interesting**. In addition, Reviewers WEtP, XBHj, xKHb, and yfed acknowledge the **comprehensive and thorough experiments**, which show that LVM-Med consistently **outperforms several state-of-the-art self-supervised algorithms and foundation models**. There are a few shared concerns among Reviewers WEtP and jRh7, such as missing details for some of the experiments and improving the introduction by discussing additional relevant work (Reviewer yfed). We appreciate your feedback and will take into account these points to improve the manuscript. For instance, missing information will be added to **Section 4.1 Implementation Details - Downstream Tasks** (main paper) and the corresponding sections in the **Appendix**. A part of the Introduction and Related Work sections will be revised based on the references provided by Reviewer yfed. Below, we address specific concerns raised by individual reviewers in detail.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a large-scale medical imaging dataset consisting of 1.3M medical images from 55 publicly available datasets along with a new contrastive learning framework based on graph matching. Specifically, the model is first pre-trained on the collected dataset and then fine-tuned on different downstream tasks; improvements are observed across different datasets and settings. Strengths: + A large-scale *medical* dataset for pre-training purposes + Well-written and easy to follow + Extensive experiments and good results Weaknesses: - The paper did not provide enough details regarding the proposed dataset, especially in terms of its usage, which, in my humble opinion, is pretty important to provide guidance on training medical models on large-scale datasets, e.g., 1. how are 3D data used, are they sliced into 2D data first? 2. The data comes from different datasets and maybe in different modalities; any balancing strategy during pre-training? 3. Any augmentations besides multi-crop? (e.g., flip, rotate, color jittering, etc.). In short, I expect the authors to provide more details regarding how they utilize the dataset - I appreciate that the authors provided dataset statistics in the supplementary, yet I am curious how these datasets were selected and whether the authors tried any filtering/curation. As data curation has been considered very important in modern large-scale model training [1], it would be great to have some insights in the medical area. [1] DataComp: In search of the next generation of multimodal datasets Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive feedback! **Question 1: Provide more details regarding the usage of the datasets.** **1.a: How are 3D data used, are they sliced into 2D data first?** Yes, for 3D volume data, we slice them into 2D images first. We mentioned this in **Section E. Dataset overviews** in the Appendix and will emphasize it more clearly in the revised version. &nbsp; &nbsp; **1.b: The data comes from different datasets and maybe in different modalities; any balancing strategy during pre-training?** We did a statistical summary of the collected datasets (in Figure 5, Appendix), and the results indicate an imbalance between data modalities. To address this imbalance, we used the following strategies. 1. In each mini-batch during pre-training, we randomly select a subset of available modalities, e.g., color image, X-ray, and MRI. 2. To balance samples in each modality, we combine over-sampling and data augmentation to increase the total number of samples. Specifically, new samples from minority classes are generated by duplicating images and applying random crop operations covering 85-95% of image regions and then rescaling them to the original resolutions. Note that these augmentations are not used in the self-supervised algorithm (operations s, t ~ T) to avoid generating identical distorted versions in this sampling procedure. We will further discuss this sampling procedure in Section E. Dataset overviews. &nbsp; &nbsp; **1.c: Any augmentations besides multi-crop? (e.g., flip, rotate, color jittering, etc.)** For the augmentations, we mainly follow prior work [29] that emphasizes the importance of multi-crop augmentation. However, in the implementation, the method is combined with other widely used operations. In particular, what we use is multi-crop $\rightarrow$ flip (probability 50%) $\rightarrow$ color jitter $\rightarrow$ random Gaussian blur $\rightarrow$ normalization.
We will add more details in *Section 4.1 Implementation details*. [29] Vicregl: Self-supervised learning of local visual features &nbsp; &nbsp; **Question 2: How are these datasets selected? Is any filtering/curation used?** In this work, we focus on building a **large-scale model for medical images which only uses raw data without any supervised signals** (e.g., segmentation masks or classification labels). This is essentially different from other multi-modal data where image-text pairs are used as inputs for the algorithm, necessitating careful data curation to prevent skewed or inaccurate information. With this characteristic in mind, we employ specific criteria to pick datasets, outlined as follows: 1. Collecting **as many public datasets as possible** whose number of samples is not too small (at least 100 images). 2. The selected datasets represent **diverse data modalities** (including X-ray, CT, MRI, color images, ultrasound, etc.). 3. The selected data represent **diverse body organs** such as lungs, cells, brains, breasts, etc. To define a set of potential datasets, we carefully surveyed different sources such as competition challenges (e.g., grand-challenge), papers, and benchmarks in medical-related conferences/journals. From each collected dataset, we then sample 20% of the images if training/testing splits are unavailable; otherwise, we use all samples in the training set to avoid potential test-data leakage, as discussed in the paper. We hope our response resolved most of your concerns and helped you evaluate our work more positively. If you have other comments, we are happy to address them in the reviewer-author discussion period. --- Rebuttal Comment 1.1: Comment: Thanks for providing the rebuttal, most of my concerns are addressed. I appreciate the authors' efforts in exploring a large-scale dataset for medical imaging analysis. I increase my rating to weak accept.
--- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for reading our response and increasing the rating. We will incorporate your valuable suggestions into our next revision.
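The over-sampling strategy described in the rebuttal (duplicating minority-modality images, with a random 85-95% crop then a rescale applied to the duplicates) could look roughly like the sketch below. It is a hedged illustration under our own naming (`balanced_indices`, `target_per_modality`); the crop-and-rescale augmentation is only noted in a comment, since the paper's actual pipeline is not published in this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_indices(modality_labels, target_per_modality):
    """Oversample minority modalities by duplicating image indices.

    modality_labels: sequence of modality tags, one per image.
    Returns indices in which every modality appears target_per_modality times.
    (In the rebuttal's pipeline, duplicated images would then receive a random
    crop covering ~85-95% of the image region before rescaling.)
    """
    labels = np.asarray(modality_labels)
    out = []
    for m in np.unique(labels):
        idx = np.flatnonzero(labels == m)
        # Sample with replacement so minority modalities are duplicated.
        out.append(rng.choice(idx, size=target_per_modality, replace=True))
    return np.concatenate(out)

labels = ["mri"] * 3 + ["xray"] * 10   # illustrative imbalance
res = balanced_indices(labels, 8)      # 8 samples per modality, MRI images repeat
```

A per-mini-batch modality subset (strategy 1 in the rebuttal) would simply restrict `labels` to the selected modalities before calling this.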
Probabilistic inverse optimal control for non-linear partially observable systems disentangles perceptual uncertainty and behavioral costs
Accept (poster)
Summary: This paper proposes an algorithm for inverse optimal control for agents operating in a partially observable Markov decision process, using only state trajectories. Given state trajectory data, a dynamics model, and an observation model, the algorithm estimates the parameters in three steps: first, a policy is estimated using iLQG; second, the belief dynamics are estimated with an EKF; finally, filtering is done by linearizing the belief propagation. The likelihood is then maximized by back-propagation. The authors tested the proposed algorithm on several synthetic problems and compared it with a baseline based on maximum causal entropy. The results show that the proposed approach can better estimate the unknown parameters from behavior than the baseline. Strengths: The problem studied here is relevant to the community. I especially appreciate that the authors consider the case without action information, which makes the setup more practical. The writing of the paper is easy to follow. The introduction is well motivated and the related work discussion is informative. The proposed idea is clean and is new to my knowledge. The authors discuss the limitations of this work in detail. Weaknesses: While the writing is easy to follow, it is unclear what assumptions are being made to use the proposed method. From Algorithm 1, the authors assume knowledge of the dynamics, observation models, and state trajectory, but the steps of Algorithm 1 require more information than that, e.g., the cost is needed in iLQG. In addition, it is also unclear how the proposed algorithm actually operates; the authors describe in detail how filtering and the approximate likelihood (given all the parameters) are computed, but do not mention how the unknown parameters are actually estimated (the authors mention "using gradient-based optimization" with automatic differentiation, but it is unclear what the computational graph entails).
As a result, while I get the high-level idea, I do not think I fully understand the working details of the proposed scheme, and therefore it is hard to give it an accurate evaluation. I think the authors should compare more with recent IOC or IRL works in the experiments and consider more realistic datasets (rather than synthetic ones). In addition, it would be more informative if the authors could better highlight their contribution in terms of the specific problem difficulties here (e.g., due to missing actions). Lastly, the proposed method is limited to low-dimensional problems, as the authors also point out. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What assumptions about known information are made in the paper? This is not clearly specified in the current writing. From Algorithm 1, it seems that only the dynamics and observation models, as well as state data trajectories, are assumed. But in iLQG the cost is also needed, and in the EKF step the belief dynamics (i.e., beta_t) are needed. What are they in the experiments? The algorithm also uses the prior of the belief. Is that assumed to be given too? 2. How does the learning actually work? The paper mentions the usage of automatic differentiation. But how is that actually done and what is the computational graph involved? Algorithm 1 requires multiple linearization steps in iLQG, the EKF, and the filtering. In particular, iLQG is an iterative process; do you also backpropagate through it? 3. While the paper supposes only state trajectories are given, it seems to make assumptions about the knowledge of the belief dynamics, or what the belief of the agent is. It is not clear whether this is a stronger or a weaker assumption. 4. The current experiments only consider rather small and synthetic problems. Can you include results on some more realistic datasets? In addition, what are the agents that generate the data in the experiments (I can only tell iLQG is used for the light-dark domain)?
It would also be good to test the proposed algorithm on the estimation of non-iLQG agents, since the proposed one is based on iLQG, so that we can assess the generality of the proposal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes, the authors made clear what most of the limitations are. It would also be good to point out the limitations due to the limited empirical evaluations done in the paper (e.g., due to the synthetic, low-dimensional nature, the form of the agent's policy, the parametric structure, etc.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the generally positive assessment of our work. Below, we expand on some of the assumptions of our method. The questions raised about these assumptions will help us improve the clarity of our paper. Regarding the weakness point that a comparison with "more with recent IOC or IRL works" might be helpful, we want to highlight that the baseline we consider represents current state-of-the-art methods. Newer variants mainly use more expressive (deep) function approximations, which can learn more complicated cost functions in higher-dimensional spaces and with potentially unknown dynamics. For the setting we consider in our work, these methods do not yield any advantage over the considered baseline; moreover, they do not provide easily interpretable parameters that can be connected back to psychologically meaningful quantities. Questions: [Q1+Q2 What are the assumptions and how does learning work?] For our method, we assume that we are given a parametric form of the system dynamics, belief dynamics, cost function, and initial belief of the agent, all of which might depend on unknown parameters that we would like to infer. Further, we assume a set of state trajectories. You are right that for evaluating the likelihood (and in iLQG), one needs the values of the parameters, e.g., for the cost function. In our IOC method, the goal is to maximize the likelihood w.r.t. these parameters. To do so, we use a gradient-based optimization approach, i.e., we start with some initial parameters and differentiate using automatic differentiation through the likelihood computation (Algorithm 1). It is true that the iterative nature of iLQG significantly complicates the computation graph, because, naively, one would have to backpropagate through the iterative procedure, as correctly stated by the reviewer.
We get around this by only linearizing once around the actually observed state trajectory (see Section 3.3), making the computation of the likelihood function and its gradients much more efficient. We then use the computed gradient to adjust our previous estimate in the direction of the optimal one. [Q3 Knowledge of agent's belief] We would like to highlight again that we do not assume knowledge of the agent's belief. As shown in Fig. 1, the agent's belief is an unobserved variable from the researcher's perspective. What we assume is that there is a parametric model that describes how the agent's beliefs are generated from the previous belief and observation (i.e., the belief dynamics). The beliefs themselves remain latent and need to be marginalized over using our probabilistic belief tracking formulation. Therefore, this is a weaker assumption than requiring the beliefs (which are internal to the agent in general) to be known. [Q4 Small and synthetic experiments] We agree with the reviewer that our method does not focus on high-dimensional problems, but we would like to emphasize that the tasks we consider are highly relevant in cognitive science and motor control, where relatively low-dimensional models are adequate for moving from controlled experiments towards naturalistic behavior. For example, the non-linear reaching model we consider (or similar models) has been widely applied in the neuroscience literature for modeling reaching data [Liu & Todorov, 2007; Knill et al., 2011; Wochner et al., 2020] and therefore constitutes a "realistic" task. Similar optimal control models have been employed in human locomotion [Papadopoulos et al., 2016], navigation [Kessler et al., 2022], and ball catching [Belousov et al., 2016].
In this paper, we evaluate it for iLQG agents, as (1) it allows efficient computation via local linearization (Section 3.2) and (2) many problems relevant in cognitive science and sensorimotor neuroscience can be solved via iLQG. For efficient and successful application to non-iLQG agents, we expect that further adaptation would be necessary, and we therefore leave this as future work. However, we are convinced that laying out this conceptual framework is already a substantial contribution to the cognitive science and sensorimotor neuroscience community. We will make sure to include the mentioned limitation in our discussion in the revised version of the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. They address my concerns.
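As a rough illustration of the gradient-based likelihood maximization described in this rebuttal, here is a minimal sketch in pure Python. A toy Gaussian log-likelihood with an analytic gradient stands in for the paper's automatic differentiation through the linearized belief/policy computation; all names and values are hypothetical and this is not the authors' implementation:

```python
import math
import random

# Toy stand-in for the IOC likelihood: observations are modeled as Gaussian
# with an unknown mean mu (the "parameter" to infer). The real method instead
# differentiates through a linearized belief/policy computation; this sketch
# only illustrates the gradient-ascent optimization loop itself.

def grad_log_likelihood(mu, data, sigma=1.0):
    # Analytic gradient of the Gaussian log-likelihood w.r.t. mu
    # (the paper would obtain this via automatic differentiation).
    return sum((x - mu) / sigma ** 2 for x in data)

random.seed(0)
true_mu = 2.0
data = [random.gauss(true_mu, 1.0) for _ in range(500)]

mu = 0.0          # initial parameter guess
lr = 1e-3         # learning rate
for _ in range(200):
    mu += lr * grad_log_likelihood(mu, data)   # gradient ascent step

print(abs(mu - true_mu) < 0.2)
```

The loop converges to the maximum-likelihood estimate (here the sample mean); the IOC setting differs only in that the log-likelihood is the far more complex linearized trajectory likelihood.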
Summary: The paper introduces a new approach to inverse optimal control which is able to deal with partially observable systems in which action signals are not known. Most existing approaches only work in fully observable systems where actions are known. The paper introduces a probabilistic formulation for inverse optimal control and uses maximum likelihood to estimate the costs and parameters of the system. To make the likelihood tractable, they use local linearization similar to iLQG. The paper tests the approach on 4 tasks and shows improved performance over a maximum causal entropy-based baseline. Strengths: I am not familiar with this area of research and most of the concepts in the paper are new to me, but I was still able to follow most of the concepts introduced in the paper; hence I think the writing is mostly coherent and fairly easy to follow. Weaknesses: I don't have any weaknesses to point out; I mainly have questions, which I will write below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) My first question is about baselines. The authors use a maximum causal entropy-based baseline for the experiments. From my understanding, maximum causal entropy is mainly used for exploration in RL, but it is not clear to me what IOC-based method is used with MCE to estimate the cost function. Does it use something like iLQG or EKF? 2) What functions are used to model f and \beta in equation 2? From my understanding, the functional form should be known beforehand, which would be quite limiting for a setting where this information is not known. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed all relevant limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for rating our paper as easy to follow, even for a reader outside this specific area of research! The raised questions, which we answer below, will help us improve the accessibility of our paper even further. As the reviewer stated that they have limited familiarity with the field, we would like to refer to the significance statement that we included at the beginning of this rebuttal. Questions: [Q1: Baselines] Yes, that is correct: MCE controllers add variability to the actions and are therefore often used for exploration. For the inverse problem, one usually assumes that the expert is not perfectly accurate, and therefore an MCE controller is often used to model the variability of the agent. The MCE formulation for the inverse problem is also of a suitable mathematical form, making it efficiently applicable. As the IOC problem based on MCE cannot be solved exactly, various approximations have been introduced, partly based on linearization [e.g., Levine and Koltun, 2012], making the application with iLQG straightforward. Specifically, in contrast to our method, there is no explicit model of partial observability, and action signals are assumed to be known. There has been very little work on IOC for partially-observable systems (see related work), and to the best of our knowledge, the EKF has not been used with the MCE formulation in IOC. Thanks for raising this point about the baseline, which we will clarify in the revised version. [Q2: Functional form of (belief) dynamics] It is true that the functional form of the system / belief dynamics is assumed to be known beforehand. This is typical in the application domains we have in mind, i.e. cognitive science and neuroscience, where researchers have quite accurate models of the sensorimotor system and are interested in inferring parameters of this system (e.g. costs, uncertainties, etc.). While more general function approximators, e.g. 
neural networks, which are also parametric models (albeit with more parameters), could be used for the dynamics, we are interested in examples with an interpretable (and therefore often low-dimensional) parameter space.
Summary: The authors propose an inverse optimal control method to handle the challenging case of inferring an internal model in a non-linear partially observable system when the action sequence is not observed. Quantitative evaluation of the proposed method was shown for some classic control problems. The authors also demonstrated through an example the potential of their method in disentangling perceptual factors and behavioral costs. Strengths: The writing is easy to understand and well organized. This work is a novel combination of ideas from some well-established techniques. The method should be of particular interest since intended actions of other agents are often only partially observable.

The value of the method is highlighted by a comparison with an alternative method that mistakenly attributes perceptual uncertainty to subjective preferences. The authors’ IOC is able to correctly infer that much of the action in this task is driven by exploration rather than exploitation. Weaknesses: It would be helpful to highlight more strongly that they can distinguish preferences from uncertainty. The authors rightly mention this virtue in the title, so I think it deserves more attention in the text. On the other hand, for that section, I would say that stronger evidence is needed, for example on a more complex task than the light-dark one. Uncertainty and cost are sometimes nonidentifiable: it could be too costly to correct a certain mistake, or it could be too uncertain to justify an optimal action, and these effects can be indistinguishable from the outside. However, when the uncertainty is dynamic while the cost function is static, these factors are dissociable. This should be addressed by the authors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors describe their method as uniquely accounting for unobserved actions. But other inverse optimal control methods allow for stochastic policies [e.g. Wu, Kwon et al 2020], in which actions can equivalently be interpreted as a latent intended action distorted by noise, or as a draw from a known policy. Aren’t these equivalent if the policy is optimized with action noise? The authors should show a sanity check that their linearization successfully allows us to estimate actions near enough the ground truth. It would be helpful to discuss more of the limitations from combining iLQG with EKF. Do the authors allow for controls that depend on a dynamic covariance, or just on estimates of the mean? This was a bit unclear from their writing. It’s likely, given the techniques, that their method does not account for controllable uncertainty. Is this correct? 
I would like to see more evidence for the main claim, as suggested in the title, that the method disentangles perceptual uncertainty and behavioral costs. To show that taking into account the agent’s beliefs helps, it would be helpful to compute how uncertain the internal belief is while the agent moves towards the light before turning towards the target, and to plot how it changes with the subjective reward parameter c. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have a thoughtful discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review, and specifically for highlighting the importance of disentangling costs from uncertainty. We will use the additional page of the camera-ready version to expand on this in the introduction to make it a more prominent feature of our work. The issue of potential non-identifiability of uncertainty and costs in certain cases is also a good point, which of course applies to every algorithm in IRL and IOC. We agree that in general, there can be cases like the one described in the review, where these factors cannot be distinguished. We will address this in the discussion. Our goal was to show that, across several example scenarios common in the motor control and cognitive science literature in which the costs and uncertainties can be disentangled, our method is able to do this while the baseline is not. Thus, our approach is a step towards exploring these issues of model identifiability. Questions: Unobserved actions (in comparison to Wu, Kwon et al., 2020): In their PNAS paper, Wu et al. (2020) assume both the agent’s observations and the agent’s action signals to be fully known (see their Figure 1B, where both the observation and the action are indicated as observed variables from the researcher’s point of view). In their NeurIPS paper, Kwon et al. (2020) assume the action signals to be known, but marginalize over the agent’s internal observations / belief (see their Figure 1 and Algorithm 2). So, both of their formulations assume observed action signals, in contrast to our method, and additionally assume a stationary policy. But yes, they also use a stochastic policy in addition to the partial observability formulation, even if it does not correspond to the widely used MCE controller. Sanity check of linearization: Thank you for your suggestion to show “a sanity check that their linearization successfully allows us to estimate actions near enough the ground truth”. 
Could you elaborate a bit further on what you precisely mean by this? The parameters that are estimated are close to the ground truth; see Figures 2 & 3 and Appendices I, J, and K. The question asked about actions, though. We estimate actions only for determining the linearization points (and for the baseline) by using the non-linear system dynamics (Appendix G). We do not claim that these estimates are very accurate, as they ignore noise (otherwise the baseline would perform better), but they are sufficiently close to yield appropriate linearization points. Limitations of combining iLQG with EKF: It is true that the combination of iLQG and EKF results in controls based only on the mean of the belief. However, the partially-observable version of iLQG (Li & Todorov, 2007), which we use here, shows uncertainty-aware behavior (as we show in the light-dark domain), because the state-dependent covariance is taken into account during the computation of the control law. In general, an extension to control methods that are based on the covariance of the belief (e.g. belief-space iLQG, van den Berg et al., 2012) should be possible in our framework, by defining the belief dynamics based on a vector including the covariance. This extension is a fruitful idea for future work and will be added to the discussion. [More evidence that the method disentangles uncertainty and costs] Thanks for raising this point. We should have been clearer about the experiment in the light-dark domain. To illustrate sources of information-seeking behavior, we have created an additional figure that shows the agent’s belief for varying cost parameters (Figure 1 in the additional pdf). An agent who takes the uncertainty of the belief into account during computation of the control law (e.g. partially observable iLQG; Li & Todorov, 2007) will move towards the light source before approaching the target. 
Adding another cost term (c) that expresses a preference to be close to the light source does not change much about this behavior (top row). An agent who computes the control law irrespective of the belief uncertainty (e.g. fully observable iLQG; Todorov & Li, 2005) will not move towards the light source per default. As a result, the belief uncertainty is higher. Only when we add an extra term in the cost function (c) to force the agent to go to the light source does the agent move to the right before approaching the target. These two sources of information-seeking behavior can lead to similar trajectories. To show that we can disentangle these factors, we simulated 100 random parameter sets (varying the target position $p$, perceptual uncertainty $\sigma$ and cost $c$). In Appendix K.4, we show that our method can infer all of these parameters, while the baseline cannot infer the perceptual uncertainty. We will add the aggregated results of the light-dark problem to Fig. 3 of the final version of the paper.
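The EKF-based belief tracking discussed in this rebuttal can be illustrated with a minimal scalar Kalman-filter belief update (mean and variance). This is an editor's sketch, not the paper's code: the paper's EKF additionally linearizes nonlinear dynamics and observation functions, and all numbers here are illustrative.

```python
# Scalar linear-Gaussian belief update: the belief is (mu, P), i.e. a
# Gaussian mean and variance over the latent state. A = dynamics gain,
# B = control gain, H = observation gain, Q = process noise variance,
# R = observation noise variance (all hypothetical values).

def belief_update(mu, P, u, z, A=1.0, B=1.0, H=1.0, Q=0.01, R=0.25):
    # Predict: propagate the belief through the (linearized) dynamics.
    mu_pred = A * mu + B * u
    P_pred = A * P * A + Q
    # Update: fold in the noisy observation z.
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    mu_new = mu_pred + K * (z - H * mu_pred)
    P_new = (1 - K * H) * P_pred
    return mu_new, P_new

mu, P = 0.0, 1.0                  # initial belief: very uncertain
mu, P = belief_update(mu, P, u=0.5, z=0.6)
print(P < 1.0)                    # the observation reduces belief uncertainty
```

In the light-dark domain, the observation noise R would depend on the state (small in the light, large in the dark), which is exactly what makes moving towards the light an epistemic action: it shrinks P faster.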
Summary: This paper targets the inverse optimal control problem for partially-observable stochastic non-linear dynamics with no observation of the actions. To estimate the parameters of the cost function in a stochastic non-linear system, the authors first derive a likelihood function for the model parameters. They then approximate this likelihood by locally linearizing the system using a combination of iterative linear quadratic Gaussian (iLQG) and extended Kalman filter (EKF) techniques. The proposed inverse method makes explicit assumptions about the dynamics of the control tasks, the agent's belief in a stochastic environment, and the structure of the cost function. The authors evaluate their approach on four simulation environments where structured noise is added to the dynamics of the environment to introduce stochasticity into the system dynamics. In comparison to the baseline, the proposed approach demonstrates better uncertainty estimation. Strengths: In general, I find this paper easy to follow, and variables in the equations are well defined. The derivations are nicely structured. The details are well documented in the Appendix, and the assumptions and limitations are explicitly considered. Weaknesses: I suggest the authors clearly define their contribution in this paper, as the likelihood formulation of the inverse optimal control problem has been derived in another work, \textit{Inverse Optimal Control Adapted to the Noise Characteristics of the Human Sensorimotor System}. To enhance clarity, I recommend placing the likelihood formulation in the background section. Claiming that their approach can disentangle perceptual uncertainty and behavioral costs might be too bold for two reasons: firstly, the authors demonstrate this claim in a single simulation environment only, and secondly, the structure of the perceptual uncertainty and the behavioral cost function is known. However, in reality, obtaining the structure of these elements is difficult. 
My major concern pertains to the number of assumptions made about different aspects of the system to obtain the proposed closed-form solution. For instance, all noises added in the experiments are assumed to be Gaussian distributed. These assumptions render the proposed approach challenging to apply in real-world scenarios, and the experiments were solely conducted in simulations. In addition, potential solutions to relax these assumptions, e.g. addressing multi-modal beliefs utilizing SMC, are only briefly mentioned in the conclusion section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the reaching task and the navigation task, why is the goal position not inferred? In the conclusion, the authors mention that their method may not be effective for high-dimensional parameter spaces. It would be beneficial for the community to know the accuracy of the uncertainty estimation as the number and dimension of parameters scale. In the sensorimotor domain, do you always have a well-structured cost function that you can obtain beforehand? That is, I think your approach would not work if a formulation of the behavioral cost function like Eq. 8 is not given. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are well documented in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the generally favorable evaluation of our paper. Weaknesses: Contribution: Yes, the likelihood formulation has previously been derived for linear systems in that paper. Here, we write it in a more general form that is applicable beyond linear systems, which is why we kept it in Section 3. Importantly, we derive new algorithms to actually carry out the inference for the case of nonlinear dynamics and nonlinear observations, including much more general noise distributions. This was previously unavailable. Disentangling uncertainty and costs: One cannot build scientific models without making assumptions. For example, without any knowledge about parametrizations of perceptual uncertainty and parametrizations of the cost function, it is not possible to unambiguously identify these. We completely agree that obtaining the structure of perceptual uncertainty and behavioral cost from behavior is a very difficult problem. In fact, our current work is, to our knowledge, the only one for the case of nonlinear dynamics, nonlinear observations, complex noise models, and unobserved actions. Therefore, we argue that explicitly stating the assumptions under which conclusions about these factors are drawn is a good thing. Here, we provide a framework for spelling out such assumptions in the form of parametric models and inferring their parameters. In many tasks in cognitive science and neuroscience, one has knowledge about some part of the model, e.g., the system dynamics or belief dynamics up to certain parameters, but other quantities, e.g., costs and perceptual uncertainties, are latent. Assumptions about the system: It is true that the noises v_t and w_t that go into the dynamics and observation function are Gaussian. However, they are transformed within these functions, resulting in non-linear signal-dependent noise models. This assumption is motivated by what is known about the sensorimotor system (e.g. 
Harris & Wolpert, 1995; Todorov & Jordan, 2002). Thus, very complex noise distributions can be handled by the current model. Questions: Why is the goal position not inferred? In reaching and navigation tasks, the goal position is usually known to both the agent and the experimenter, which is why there is no need to infer it. The (internal) cost and perceptual uncertainty, on the other hand, can usually not be set or determined by the experimenter. Further, we aimed to keep the parameters consistent across tasks, and therefore used a similar parameterization for all of them. In principle, the goal position could easily be inferred using our method as long as the problem remains uniquely identifiable. Scaling with number of parameters: Thank you for your suggestion; this would indeed be interesting. However, we do not believe that there is a general scaling law, as the estimation depends on the specifics of the problem more than on the mere number of parameters. Structure of the cost function: In a motor control experiment, researchers typically have a good idea of the task being performed by the agent and potentially other factors influencing performance (cognitive, biomechanical effort costs, etc.), which can be formalized as parameters of a cost function. For example, the costs of movements may depend on path length, accelerations, or torques of movements. If such a parametric cost function is not available, one would have to resort to more general function approximation methods, which can be more challenging to render interpretable. --- Rebuttal Comment 1.1: Comment: Thank you for the comments. After reviewing your response to reviewer V7nV, the assumptions made in your approach are much clearer to me. I suggest that you also include a statement about these assumptions in your revision. 
It looks to me that your approach could be beneficial in your particular domain while offering a somewhat limited contribution, as mentioned by other reviewers, so I will update my score to 5.
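The rebuttal's point that standard Gaussian noise, once transformed inside the dynamics function, yields signal-dependent (non-additive) noise can be demonstrated in a few lines. This is a hypothetical scalar example (the scaling parameter c and the dynamics are illustrative, not the paper's model):

```python
import random
import statistics

# Standard Gaussian noise w, passed through the dynamics, becomes
# signal-dependent: its effect scales with the control magnitude |u|,
# in the spirit of signal-dependent motor noise (Harris & Wolpert, 1995).

def noisy_dynamics(x, u, w, c=0.5):
    # w ~ N(0, 1); the resulting state noise has standard deviation c * |u|.
    return x + u + c * abs(u) * w

random.seed(1)

def next_state_samples(x, u, n=20000):
    return [noisy_dynamics(x, u, random.gauss(0.0, 1.0)) for _ in range(n)]

sd_small = statistics.stdev(next_state_samples(0.0, 0.1))
sd_large = statistics.stdev(next_state_samples(0.0, 1.0))
print(sd_large > 5 * sd_small)   # larger control -> larger variability
```

Even though w itself is Gaussian, the distribution of the next state as a function of the (possibly nonlinear) control and state is no longer simply additive Gaussian, which is the sense in which the method handles "non-Gaussian" noise.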
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their generally positive reviews. In light of some of the questions and comments, we would like to clarify the current state of inverse reinforcement learning (IRL) and inverse optimal control (IOC) when applied to human or animal behavior. Looking at the publications at NeurIPS from recent years, IRL methods have been published that are targeted at inferring animal behavior, which, however, completely exclude perception (partial observability due to sensory uncertainty from the perspective of the agent) and assume the actions to be fully observed (the action observed by the researcher may not be identical to the planned and intended movement, and the control signal, e.g. acceleration, is not directly measured; e.g., position is measured instead). Thus, this IRL work omits explicitly modeling perception, i.e. sensory uncertainty. Accordingly, this does not allow estimating an observation function or perceptual noise, and there is consequently not even a notion of epistemic versus pragmatic actions. In recent IOC work, methods typically work in deterministic, fully-observable settings and assume the agent’s action signals to be observed (i.e. most problems implemented in OpenAI Gym and comparable frameworks). There is IOC work that explicitly assumes partial observability from the perspective of the agent, i.e. involving sensory noise. However, to the best of our knowledge, there is no current method that can accommodate the setting considered here, i.e. problems that are stochastic, partially observable (both the partial observability introduced by perception from the perspective of the agent and the partial observability of the agent’s true intended actions from the perspective of the researcher), and subject to non-Gaussian noise, which is modeled by passing Gaussian noise through the nonlinear functions describing dynamics and observations. 
We are motivated by problems relevant to neuroscience, motor control, cognitive science, and psychology, where biological systems are often modeled as having noisy sensory and motor systems, resulting in stochastic, partially-observable problems with unobserved action signals. For more naturalistic, sequential tasks, which are the frontier of current research, intrinsic beliefs and subjective costs are unknown. Moreover, when considering limb kinematics or muscle activations, dynamics are nonlinear but usually well characterized, e.g. derived from first principles such as kinematics or measured empirically in separate experiments. Sensory uncertainty also involves nonlinearities, e.g. visual angles. Additionally, sensory noise and action noise are overwhelmingly non-Gaussian but well characterized, e.g. signal-dependent noise. This complicates the inverse optimal control problem tremendously. Here, we present a method that is applicable in this setting. To our knowledge, it is the first such method applicable to this setting and able to distinguish between epistemic and pragmatic actions. Therefore, we see broad applicability and utility in the present work, from neuroscience to motor control and beyond. Pdf: /pdf/8a0c8db853f2dddf42cb0d5ab4558bf1063e9b6e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a probabilistic approach to inverse optimal control for partially-observable stochastic non-linear systems with unobserved action signals. It derives an approximate likelihood function for the model parameters by linearizing the system around the observed trajectories and tracking the agent's belief distribution. This method is demonstrated to infer the parameters more accurately than the baseline maximum causal entropy (MCE) approach on two classic control tasks and two human behavioral tasks. Additionally, it shows the ability to disentangle the influences of perceptual uncertainty and behavioral costs as sources of information-seeking behavior. Strengths: - The introduction gives a clear background, which is helpful for someone like me who is less familiar with this field. It provides a clear understanding of the research question and the existing approaches. - This paper proposes a new probabilistic approach for inverse optimal control in stochastic non-linear systems with missing control signals and partial observability, which outperforms the baseline MCE model and also provides an interpretable representation of the parameter space. - The proposed method is evaluated on different tasks, including two human behavioral tasks, which might inspire further investigation of how the method can be applied to neuroscience and cognitive science studies. Weaknesses: A more comprehensive investigation into the method's robustness under various conditions would strengthen the paper's contributions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Why is it necessary to assess the proposed method's performance on pendulum and cart pole tasks, despite their deterministic nature and lack of partial observability? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately acknowledged and addressed the limitations of their methods, which include the following: (1) The focus on tasks that can be well-solved by control methods based on linearization and Gaussian approximation (iLQG and EKF), which may not fully capture the complexity of naturalistic behavior, and (2) the concern that the method might not scale effectively to high-dimensional parameter spaces as optimization in a high-dimensional non-linear space can potentially lead to getting stuck in local minima. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive assessment of our work. More comprehensive investigation into the method’s robustness: We agree that, as always, there is room in this manuscript for even more evaluations. Figure 3 gives evaluations involving 100 datasets each for the considered problems. Additionally, Appendices I, J, and K provide more evaluations. Evaluation on pendulum and cart pole: While the pendulum and cart pole tasks are deterministic and fully observable in their standard versions, we have implemented stochastic and partially observable versions of these tasks. While the main focus of our method is its application in cognitive science and motor control, we used these tasks to show that our method is not only applicable in this domain but can also find applications in other fields, such as robotics, where partial observability and stochasticity might also play a role. We also agree that in principle much more naturalistic tasks are conceivable, and indeed, over the last 15 years we have continuously contributed to establishing more naturalistic tasks in the field. However, over the last few years, probabilistic methods that perform dynamic state inference for tasks that are more naturalistic but not fully unconstrained, such as navigation, have found wide acclaim, including at NeurIPS. Here, we provide not only inference of the belief state but additionally inference of the costs and uncertainties, which includes disambiguating pragmatic from epistemic actions. Thus, we think that this is a significant contribution that extends current analyses and should be widely applicable in the field. If these are the only two concerns with our paper, we would like to politely ask the reviewer whether increasing the score would be appropriate.
Summary: This paper is a strong contribution to the inverse optimal control problem when actions cannot be observed. Their modelling of the agent and the “researcher” observer is particularly interesting. The mathematical depth is good and sound. The results may be enough for a theoretical paper. However, there are too many unknowns for it to be ready for acceptance, particularly around some definitions, like partial observability in relation to noise. Another issue is the clarity regarding the baseline used: is it self-programmed, or is it taken from previous literature? It seems that it does not work for any of the problems tested. Strengths: - The mathematical depth is good. - Inverse optimal control plus noise estimation is relevant for many applications, especially for motor control research. - Results may be enough for a theoretical paper. Weaknesses: - Narrative: While the contribution looks very promising, the explanation of the contribution in the introduction is not clear enough to understand the main focus of the paper. Furthermore, there is a need to jump forward and backward in the text to fully understand the details of the approach. In particular, it is sometimes complicated to see whether the authors are talking about the agent or the observer (“researcher”). Another example: I had to read the whole article to understand the title. - The partial observability may be controversial as it is defined. The idea of tracking the belief of the agent is interesting, but why not directly use the observed state? Is it then the noise that the algorithm treats as partial observability? If so, the complexity depends on the type of noise. - Following the previous comment, while the method may be used for any parameter estimation (as the authors state), why estimate only the observation/motor noise? By the way, is this the observation noise of the agent or of the observer? From the text I assumed that the authors were doing system identification at the same time. 
But they are estimating the noise. - The baseline explanation needs further description for the sake of clarity. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - The description of the variables v_t and w_t could be improved. - Why is there a final cost that is independent of the action taken, in contrast to the previous time steps? - What is $\pi_t(x_t)$? (Section 2.2) - Why does the stochastic policy have a capital Pi? - It is not clear why the EKF is introduced. - “In the inverse optimal control problem, the goal is to estimate parameters θ ∈ Rp of the agent’s optimal control problem given the model and trajectory data. These parameters can include properties of the agent’s cost function, the sensory and control systems of the agent, or the system’s dynamics.” This is a definition proposed by the authors. Originally, IOC aims to recover the cost function from expert demonstrations. Estimating system parameters is called system identification. - “cost of final velocity cv” Is this related to the cost definition with the final cost at the end time? - “observation noise” of what quantity? Experimenter or agent? - “Note that past methods based on MCE are usually limited to estimating cost functions, so that parameters such as the agent’s noise therefore cannot be inferred.” This sentence helps a lot to understand the authors’ approach. - It is not clear why the dynamics of the agent’s state and its beliefs only depend on the state and the belief: “closed-form expression for p(xt+1, bt+1 | xt, bt).” - “we applied a baseline method based on the maximum causal entropy (MCE) approach [3].” Which method? Self-programmed or taken from the literature? The baseline text could be improved. In the end, I am not sure how many baselines are being used. - “expressed as a non-quadratic cost function of the joint angles” Is this really true? 
Besides, while I understand that there is not enough space for everything in the paper, it would be nice to place the description of the input-output system in the main paper, so there is no need to go to the appendix to read that the arm is controlled with torque. - It is not clear why the baseline cannot even recover the action cost properly. Would it work in the absence of noise? - The agent navigation dynamics can be simplified to use the turn rate as the control action: $[\dot{x}, \dot{y}, \dot{\theta}] = [\cos(\theta)w, \sin(\theta)w, u]$ Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Limitations are sufficiently addressed. My only constructive suggestion is to mention that noise covariance estimation for non-white-noise sources is a very important problem in control. So I would include it in the discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the strong theoretical contribution of our work. The many detailed questions, which we answer below, will help us improve the clarity of our paper. [Noise variables] The noise variables $v_t$ and $w_t$ are both standard Gaussian random vectors. However, this does not mean that only additive Gaussian noise is possible. These variables can be subject to potentially non-linear transformations in the dynamics and observation functions, resulting, e.g., in signal-dependent noise models. Therefore, this accommodates Weber-Fechner-type sensory uncertainty as well as signal-dependent noise of motor actions in humans and animals. [Final cost] This is a standard formulation in finite-horizon control problems (see e.g. Wikipedia on LQG). For time 1 up to T-1, the agent takes an action and incurs a cost based on the current state and the action. At time T, the agent reaches the final state and can no longer perform actions. Therefore, the final cost depends only on the final state. [$\pi_t(x_t)$] This is the agent’s policy function, which maps the current (estimate of the) state to the action. Note that later in the paper, we use a probabilistic policy, so that $\pi_t$ also depends on a noise variable. [Capital pi] For the stochastic policy, we denote by $\pi_t$ the policy function, which depends on the noises in addition to the belief. We use $\Pi_t$ to denote the resulting policy distribution, which depends only on the belief. [EKF] The agent has sensory uncertainty about the true state of the world (partially observable MDP), different from the RL setting. We use the EKF for modeling the belief dynamics of the agent, so it is used in our algorithm. Please refer to Section 3.2 to see how the concrete choices for belief dynamics (EKF) and policy (iLQG) are used in our IOC algorithm. [IOC vs. 
SI] You are right that commonly IOC is used for inferring parameters of the cost function given behavioral trajectory data. System identification, on the other hand, usually denotes the whole process of learning the dynamics of the system and includes determining controls for the data generation as well. Note, however, that in motor control, neuroscience, and cognitive science, the system is usually well characterized, e.g. muscle dynamics or the kinematic chain of a limb. As we consider the task of inferring the parameters of the system given a fixed set of trajectories, from our point of view, this corresponds best to the setting of IOC, which is why we labeled it this way. [Final velocity]: In our considered tasks, the system should be controlled to have close to zero velocity at the final time step. The parameter $c_v$ determines how high the cost for the velocity penalty at the final time step is. More details and precise formulas of the cost function parameterizations are provided in appendix J. [Obs. noise] We assume that the researcher can observe the state and the agent has noisy/partial state observations (Fig. 1 / section 3). Thus, the observation noise refers to the agent. [State and belief dynamics] Generally, the state $x_{t+1} = f(x_t, u_t, v_t)$ in a POMDP can depend on all previous observations of the agent $y_{1:t-1}$, because the agent can choose an action based on all prior observations. One can simplify the problem by assuming that the agent’s action $u_t = \pi(b_t, \xi_t)$ is a function of the agent’s current belief (and perhaps some noise). The belief, in turn, is a function of the previous belief and observation $b_t = \beta(b_{t-1}, y_{t-1})$. This allows us to write a system of states and beliefs, which only depends on the previous state and belief. We adapted this idea from van den Berg et al. (2011). [Which baseline] As previously stated, more details of the baseline can be found in appendix B. 
We are not aware of any previous approach that shares the same setting, i.e. partial observability of states and actions, and therefore, past implementations cannot be directly applied to the setting we consider (e.g. finite time horizon). Accordingly, we implemented the baseline ourselves. As there exist various approaches on how to choose a feasible approximation, we show that our implemented baseline works for the tasks considered in the usual setting of IOC (full observability and given control signals) in appendix K3. [Cost] Good point, we should have mentioned that the reaching task is torque-controlled, which considerably complicates both the control and the IOC. Because the representation of the state consists of elbow and shoulder angles and the cost function depends quadratically on the hand position, which is a non-linear function of the angles, the cost is a non-quadratic function of the state. We will use the additional page of the camera-ready version to clarify this by expanding the description of the reaching task in the main text and adding some of the equations from the Appendix. [Baseline cost] The main reason why the baseline cannot recover the cost is that it assumes the agent’s action signals are known, which is essentially the assumption of all previous methods that we are aware of. When we provide the baseline with the action signals, it can recover cost parameters, but still struggles with noise parameters (see Appendix K.3). In the absence of noise, the baseline would work well, as the true actions could be exactly estimated based on the states. [Agent navigation dynamics] We politely disagree with the reviewer that the agent navigation dynamics can be simplified to the proposed form. The proposed model is actually very similar to the dynamics model we are using, but we additionally allow the agent to change the forward velocity $w$ by adding an acceleration to it, which is why we include it in the state. 
Please also keep in mind that we are using discrete time (difference) equations instead of continuous time (differential) equations, which also accounts for a difference in notation. --- Rebuttal Comment 1.1: Title: Additional comments on weaknesses Comment: [Explanation of contribution] To summarize our contribution in one sentence, we introduce an inverse optimal control method that can deal with partially observable stochastic systems, where the agent’s action signals are unknown to the researcher. This requires distinguishing between the control problem from the agent’s viewpoint (Fig. 1A) and the inference problem from the researcher’s viewpoint (Fig. 1B). We have tried to be precise in our wording and always clearly distinguish these two perspectives, but we are happy to elaborate if any concrete passages in the paper are unclear. As for the title, we think that the title both captures the problem setting (stochastic partially observable systems) and the unique selling point (disentangling perceptual uncertainty and behavioral costs). See also the answer to all reviewers at the beginning of this rebuttal. [Partial observability] In IOC we take the perspective of the researcher, who observes a trajectory of states $x_t$, but does not have access to the agent’s noisy observations $y_t$ (see Fig. 1 in the paper). The agent cannot directly observe the true states $x_t$, but instead receives noisy / partial observations $y_t$. Based on these observations, the agent forms a belief state $b_t$ (e.g. using the EKF). We as researchers cannot observe this belief state, because it is a quantity internal to the agent. The likelihood $p(x_{1:T})$ depends on the true states $x_t$, which are observed by the researcher. To compute this likelihood, the unobserved internal variables of the agent need to be marginalized out. Although this way of formalizing the IOC problem might be novel, we do not think it is controversial. 
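As a hedged illustration of the generative structure described in this rebuttal (not the paper's actual implementation), a minimal 1-D linear-Gaussian rollout might look as follows; the constants, function names, and the simple Kalman belief update are assumptions for exposition only:

```python
import random

# Assumed toy constants: linear dynamics A, process noise, agent's sensory
# noise, and a feedback gain for the belief-based policy.
A, SIG_V, SIG_W, GAIN = 1.0, 0.1, 0.5, 0.5

def rollout(T, seed=0):
    rng = random.Random(seed)
    x, (mu, var) = 1.0, (0.0, 1.0)  # true state and agent's belief (mean, variance)
    xs = []
    for _ in range(T):
        xs.append(x)
        # Agent's noisy observation of the true state (unseen by the researcher).
        y = x + SIG_W * rng.gauss(0.0, 1.0)
        # Kalman belief update: b_t depends only on (b_{t-1}, y_t).
        k = var / (var + SIG_W**2)
        mu, var = mu + k * (y - mu), (1.0 - k) * var
        # Action is a function of the belief, not of the true state.
        u = -GAIN * mu
        x = A * x + u + SIG_V * rng.gauss(0.0, 1.0)
        var = var + SIG_V**2  # prediction step for the next round
    return xs  # only the state trajectory is visible to the researcher

states = rollout(20)
```

Only `xs` corresponds to what the researcher records; the observations `y` and the belief `(mu, var)` are internal to the agent and would have to be marginalized out when computing the likelihood of the state trajectory.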
[Observation noise] We infer parameters of the cost function and of the noise model. For inferring the parameters correctly, the problem still needs to be uniquely identifiable. For the problems considered, it is therefore not possible to do full system identification while estimating the cost function, as the problem quickly becomes highly unidentifiable. We therefore limit our evaluation to problems with few interpretable parameters that are expected to be uniquely identifiable. Experiments in cognitive and motor science are usually designed this way. Importantly, dynamics in these settings are usually well characterized, e.g. derived from first principles such as kinematics or measured empirically in separate experiments. [Baseline description] For the sake of readability, we preferred to have a rather short intuitive explanation of the baseline in the main text. In Appendix B, there is a detailed and more formal description, which should contain all necessary descriptions. If there are any further concrete questions about the baseline, we are happy to answer them and improve our description. --- Rebuttal Comment 1.2: Title: Thank you for the clarification Comment: Thanks for the comments. I really think that the paper can improve the clarity of the presentation, particularly by being clear about partial observability vs. noise, so that the contribution of the experiments is clear. I will reread the paper in detail with the new comments in mind and make my final evaluation.
Summary: This paper presents a new formalization of inverse optimal control on partially-observable Markov decision processes. The authors argue that most existing works on inverse optimal control or inverse reinforcement learning focus on fully-observable Markov decision processes. Their approach extends iterative linear quadratic Gaussian (iLQG) and Maximum causal entropy (MCE) reinforcement learning, introducing local linearization to achieve a tractable likelihood. The method is evaluated through simulations. Strengths: + The formulation and derivation of the method are solid and well-motivated. Weaknesses: - The derived method seems natural, making it challenging to identify the novelty of the proposal. - The evaluation lacks sufficient quantitative and qualitative analysis, as it only covers simple settings and compares with the MCE approach ~~without using the proposed method~~. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Considering the numerous studies on inverse reinforcement learning and inverse optimal control without the assumption of linearization of the dynamical system and Gaussian approximations, a justification is needed in terms of applicability to robot learning. A comparison with methods without such assumptions should be included. Can the method handle more dynamic tasks such as walking? 2. Clarify the difficulty of the problem in a more intuitive way to explain why this problem remains unsolved. The formulation is somewhat straightforward. Therefore, potential readers may wonder why the problem is still unsolved. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This is just a suggestion. 
Regarding POMDP in robot learning, exploring the relationship with world models (e.g., [1,2]) would be valuable. [1] Hafner, Danijar, et al. "Dream to control: Learning behaviors by latent imagination." arXiv preprint arXiv:1912.01603 (2019). [2] Tadahiro Taniguchi, Shingo Murata, Masahiro Suzuki, Dimitri Ognibene, Pablo Lanillos, Emre Ugur, Lorenzo Jamone, Tomoaki Nakamura, Alejandra Ciria, Bruno Lara & Giovanni Pezzulo (2023) "World models and predictive coding for cognitive and developmental robotics: frontiers and challenges," Advanced Robotics, 37:13, 780-806 Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for calling our formulation and derivations solid and well-motivated. In our answers to the specific questions below, we elaborate on the perceived lack of novelty and simplicity of our evaluations. We are a bit unclear about what is meant by the purported weakness that "The evaluation [...] compares with the MCE approach without using the proposed method." In our evaluation, we do use both the MCE method and our proposed method. Could you elaborate? Answers to questions: 1. We acknowledge that there are methods for IRL and IOC in settings with more complicated dynamics models such as walking. However, these methods typically work in deterministic (!), fully-observable (!) settings and assume the agent’s action signals to be observed (!) (i.e. most problems implemented in OpenAI gym and comparable frameworks). Here, we are motivated by problems relevant to cognitive science and neuroscience, where biological systems are often modeled as having noisy sensory and motor systems, resulting in stochastic, partially-observable problems with unobserved action signals. This complicates the inverse optimal control problem significantly. We present a method that explicitly models these factors and therefore achieves good results on a range of different problems, albeit with simpler dynamics compared to methods applied on deterministic fully-observed problems, which is to be expected. If a relevant scenario in robotics is of interest, it could be kinesthetic teaching or imitation learning. Our method e.g. allows taking into account the specific human signal-dependent motor variability or internal biomechanical cost. 2. Thank you for raising this point. We agree that an IOC formulation that derives the generative model for a given forward problem and inverts it using probabilistic inference might seem straightforward. We argue that this is a virtue of our probabilistic problem formulation instead of a shortcoming. 
We have included a significance statement at the beginning of our answer to all reviewers. We will make sure to include a precise statement about what current methods in IRL and IOC cannot do that our method achieves. Thanks for the suggested references about POMDPs in robot learning, which we will happily include. We agree that extending the proposed approach towards learning of world models is a promising direction, and we will include this in the discussion. However, world models concern a very different problem. Even if a world model is given, when observing a behaving agent it is not clear what the internal belief states and the internal cost functions are, particularly when the internally generated actions are not fully observed. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response, including the general comment to all reviewers. Now, your motivation and contribution have become clearer to me. >We are a bit unclear about what is meant by the purported weakness that "The evaluation [...] compares with the MCE approach without using the proposed method." In our evaluation, we do use both the MCE method and our proposed method. Could you elaborate? I apologize for causing confusion. Please disregard the phrase "without using the proposed method." I believe the authors have adequately explained their reasons for not incorporating more baseline methods.
Summary: In this paper, the authors propose a method to infer an agent’s internal model in a Partially Observable Markov Decision Process (POMDP) when the agent’s actions are not observable. Using local linearization, the authors show how a closed-form approximation of the likelihood function for state trajectories can be constructed to subsequently yield maximum likelihood estimates. They also show that when there are confounding factors that can lead to the same behavior, the proposed method is able to disambiguate the factors better compared to the baseline. **After author's rebuttal:** I appreciate the authors offering clarifications. My main concern was around the significance of the contribution, which I based on the references discussed in the paper since I haven't been working on this specific area myself. However, looking at the other reviews and the author's comments, it appears the paper does more than just relax a simple assumption or two. I also mentioned that if the inference mechanism is aware of the generative model, one would expect better estimation in general. I make this comment from a Bayesian perspective: if the observations are from a Gaussian distribution and we model it as such, we expect to learn the sensible parameters with enough data, as opposed to an unaware model that assumes, say, a Laplace distribution. I would need to think more as to why this might not hold here, as I am still inclined to believe it does. In any case, based on the discussion of the contributions, I am changing my rating from 4 to 5. Strengths: The paper does a good job at laying down the groundwork for the problem, going over previous work and pointing at the potential shortcomings of existing methods. The motivating example in Fig. 1 works well and the paper’s structure follows naturally. The experiments are well-designed and the results are clearly discussed, establishing the technique’s superior performance over the baseline. 
Weaknesses: However, in my opinion, the contribution lacks significance for acceptance at the venue. The problem formulation is slightly more general than existing work but only marginally so. The linearization to get approximate likelihood is clean but not novel. The results are not surprising since a technique that is aware of the generative assumptions is expected to lead to better posterior estimates. The disambiguation behavior follows from better estimation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our exposition of the problem, the experiments, and the results. However, we would like to ask the reviewer to substantiate the claim that “the contribution lacks significance for acceptance at the venue”. Looking at the publications at NeurIPS from recent years, IRL methods targeted at inferring animal behavior have been published that completely exclude perception (partial observability due to sensory uncertainty), assume the actions to be fully observed (the observed action is assumed to be identical to the planned and intended movement), and therefore cannot even accommodate the notion of actions being epistemic or pragmatic. Our problem formulation for partially observable systems with unobserved action signals, which the reviewer calls “slightly more general than existing work but only marginally so”, is motivated by a wide range of experiments in neuroscience and cognitive science. To the best of our knowledge, there had previously been no inverse optimal control method for these cases. We also do not fully agree that it is unsurprising that an inverse method that follows the generative assumptions of the forward problem works well, as no previous method has been available at all. Moreover, we argue that it is precisely this conceptual clarity that makes our method attractive and widely applicable. The better estimation mentioned in the review would not be possible with previous methods, as we show in our experiments. The disambiguation between uncertainty and costs does not follow trivially but stems from using an inference method that accurately incorporates the sensing and acting uncertainties of the agent being modeled. 
If there are previously published methods that have all these properties, could you please point out IRL or IOC methods involving partial observability of the state, that can distinguish between epistemic and pragmatic actions, and allow for unobserved action signals?
SODA: Robust Training of Test-Time Data Adaptors
Accept (poster)
Summary: This paper proposes SODA, a test-time data adaptor for a black-box source model, leveraging Zeroth-Order Optimization for the adaptor, which involves random perturbations of the adaptor model's parameters. It also considers the scenario where gradient information is available, namely SODA-R, and an online setting, SODA-O. The proposed method outperforms existing benchmarks, including BETA and DINE. The experiments are comprehensive on CIFAR10-C/CIFAR100-C. However, experiments on more benchmark datasets would give better confidence in the proposed framework for practical application. Strengths: The test-time adaptor problem is well-motivated, and leveraging the adaptor could potentially save training cost in practice. Under the limited-information scenario, the authors claim that they successfully tackle the problem better than other benchmarks, such as BETA and DINE. I value the simplicity of the approach, which achieves better performance with only two components: 1) mutual information maximization and 2) cross-entropy loss with pseudo-labels. In the black-box scenario, overcoming the lack of gradients is another main ingredient of this paper, and the method outperforms other benchmarks on CIFAR10-C/CIFAR100-C. The application of Zeroth-Order Optimization is another key ingredient, but the simplicity is what appeals to me most. Weaknesses: 1. I would like to understand the relationship between the model perturbation and the data augmentation in the proposed framework. See the more detailed question in the Questions section. 2. A good combination of the choices of $\sigma, \alpha, \tau$ seems to be critical. I see this discussion at C3 in the Appendix, but I suggest that the authors make this more explicit in the main paper. If those parameters are dataset- or task-dependent, it's not yet applicable in practice. 3. Why does $\sigma=0.5$ generally achieve better performance? 
And the better performance with $\alpha=0.0001$ implies that the mutual information term is the most critical. A further ablation study of each loss term would be necessary. Minor Comments: - L61: SOTA $\to$ SODA? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do we need to apply data augmentations in the SODA framework? I guess not, but information about data augmentation, including for DINE and BETA, seems to be missing. In another sense, data augmentation (like dropout) conflicts with applying the model perturbation in the mutual information calculation - Eq. $(7)$. So if we don't have to apply data augmentation, what is the primary source of the outperformance? I would like to see some convincing evidence for the hypothesis. 2. Aligning with the first question, what is the source of the randomness in calculating Eq. $(7)$? Specifically, is it the perturbation on $\theta$, or the data $x_i$? I would like to have confirmation from the authors. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The main limitation of this paper is the scope of the experiments, despite the extensive study on CIFAR10-C/CIFAR100-C. There are many other benchmarks in domain adaptation, such as Office-31, VISDA, Office-Home, etc. It would be more convincing if the authors could demonstrate the outperformance with a fixed set of hyperparameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1:** I would like to understand the relationship between the model perturbation and the data augmentation in the proposed framework. **AW1:** There are **two kinds of “perturbation”** in our proposed work: - **Parameter perturbation** used in the gradient estimation of zeroth-order optimization (ZOO) [1]. It is used to overcome the inaccessible gradient problem. - **Data adaptation** achieved by the data adaptor. **Data adaptation is different from data augmentation** in two respects: - Working scheme: data adaptation generates perturbations using the network and adds them to the original data samples, while data augmentation performs pre-defined visual transformations on data samples. - Purpose: the perturbations generated in data adaptation are used to adapt test data to the deployed model, while data augmentation is usually used to reduce overfitting in neural network training. > **Q1:** Do we need to apply data augmentations in the SODA framework? I guess not, but data augmentation information, including DINE and BETA, seems missing. **AQ1:** Data augmentation is not considered in SODA and DINE, but is used in BETA. > **Q2:** Aligning with the first question, what is the source of the randomness in calculating Eq. (7)? Specifically, the perturbation on $\theta$ or data $x_i$? I would like to have confirmation from the authors. > > **Q3:** In another sense, data augmentation conflicts (like Dropouts) with applying the model perturbation in mutual information calculation - Eq. (7). **AQ2&3:** First, we would like to explain the working scheme of mutual information maximization (IM). IM [2] has two terms: - Conditional entropy, which encourages the model prediction to be more certain and form tighter clusters for each class. - Marginal entropy, which encourages the model prediction to be more diverse and form more separated clusters among classes. 
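As a hedged sketch of the two entropy terms described above (not the authors' actual code; `info_max_loss` and its inputs are assumed names for exposition), the IM objective for a batch of softmax outputs could be written as:

```python
import math

def entropy(p):
    # Shannon entropy (natural log) of one probability vector.
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def info_max_loss(probs):
    """IM objective for a batch of softmax outputs `probs` (list of prob. vectors).

    Minimizing it lowers the conditional entropy (each prediction becomes more
    certain) while raising the marginal entropy (the batch-average prediction
    stays spread across classes)."""
    cond = sum(entropy(p) for p in probs) / len(probs)   # certainty term
    marginal = [sum(p[c] for p in probs) / len(probs)    # batch-average prediction
                for c in range(len(probs[0]))]
    return cond - entropy(marginal)

# Confident & diverse predictions score better (lower) than uniform ones.
confident = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
uniform = [[1/3, 1/3, 1/3]] * 3
```

For uniform predictions both terms equal log 3, so the loss is zero, while confident and diverse predictions make it negative.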
In the calculation of IM, the randomness comes from the distribution of the test data and the uncertainty of the model prediction, not from perturbations on either $\boldsymbol{\theta}$ or $\mathbf{x}_i$. Data augmentation does not conflict with the calculation of IM in Eq. (7). > **Q4:** So if we don't have to apply Data Augmentation, what is the primary source of the outperformance? I would like to see some convincing evidence of the hypothesis. **AQ4:** We agree that data augmentation is a powerful method to improve the performance of models, but the key idea of SODA is data adaptation instead of data augmentation. The effectiveness of SODA comes from the training of the data adaptor to generate adapted data. The training objective consists of two components: - Supervised training with reliable pseudo-labels to alleviate the data corruption problem caused by unreliable pseudo-labels. - Unsupervised training of data samples with unreliable pseudo-labels to encourage the model prediction on those data samples to be certain and diverse. > **W2:** A good combination of the choice $\sigma$, $\alpha$, $\tau$ seems to be critical. I see this discussion at C3 in the Appendix, but I suggest that the authors make this more explicit in the main paper. **AW2:** Thanks for your constructive suggestions. We agree that needing to choose a good combination of hyper-parameters is not ideal. But as discussed in Appendix C3, our proposed SODA is robust to most combinations. With your kind reminder, we will put this discussion in the main paper. > **W3:** If those parameters are dataset/or task-dependent, it's not yet applicable in practice. **AW3:** Thanks for your insightful comments. We also agree that the need to choose good hyper-parameters may hinder the practicality of our work. But for the CIFAR-10-C and CIFAR-100-C datasets used in our main experiments, accuracies before adaptation range from ~10% to ~90% for different corruptions. 
Using a fixed threshold and ratio, our proposed SODA improves the deployed model for almost all corruptions, showing that SODA can handle distribution shifts to various extents, and is not specifically dataset-dependent. > **W4:** Why does $\sigma$ = 0.5 generally achieve better performance? **AW4:** The hyper-parameter analysis experiments are conducted on CIFAR-10-C Gaussian noise corruption. Before adaptation, the initial accuracy is 51.28%. Since $\rho$ ($\sigma$ in your question) should ideally be the noise ratio in the pseudo-labels, i.e. the error rate of the deployed model before adaptation, setting $\rho=0.5$ is expected to give better results. However, in our main experiments, we do not specifically choose different $\rho$ for different kinds of corruption with different initial accuracies. SODA with a fixed $\rho$ has already improved the deployed model to a large extent. > **W5:** And a better performance with $\alpha$ = 0.0001 implies that the mutual information term is the most critical. A further ablation study of each loss term would be necessary. **AW5:** Thanks for your instructive suggestion. Both loss terms in SODA are useful. Besides the discussion in Appendix C3, we also show the effectiveness of each loss term in the Office-Home Art->Clipart task: |Office-Home|Deployed|Pseudo-label only|IM only|SODA| |-|-|-|-|-| |Art->Clipart|44.47%|44.99%|45.36%|46.53%| > **Minor Comments**: L61: SOTA $\to$ SODA? **A:** Thanks for pointing out this typo; it should be SODA. We will fix it in our revised paper. **Answer for limitations:** With your constructive suggestion, we provide more experimental results in the general response. We also agree that using adaptive thresholds might be a promising improvement in the future. > References: > > [1] Sijia Liu, et al. A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. IEEE Signal Processing Magazine, 2020. > > [2] Shi, Yuan, and Fei Sha. 
Information-theoretical learning of discriminative clusters for unsupervised domain adaptation. ICML 2012. --- Rebuttal Comment 1.1: Comment: I appreciate the authors taking the time to answer all questions during the rebuttal. After carefully checking all answers and the source codes, I'm more convinced of the results. Although the real-world scenario deployment is still in question, the adapter combined with the mutual information setting seems to work well under synthetic noise scenarios and is worth presenting. Therefore, I raise my rating one step more. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer H74J! Comment: We are glad to hear that our response has addressed your questions. Thanks for upgrading your score!
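As a side note to the ZOO discussion in the rebuttal above, the two-point gradient estimator underlying this kind of zeroth-order optimization can be sketched generically as follows; this is a hedged sketch with assumed names, not SODA's implementation:

```python
import random

def zoo_grad(loss, theta, mu=1e-3, n_queries=4000, seed=0):
    """Two-point zeroth-order gradient estimate of `loss` at `theta`.

    Only function evaluations are used -- no backpropagation through the
    (black-box) model is required. `mu` is the smoothing radius and
    `n_queries` the number of random perturbation directions."""
    rng = random.Random(seed)
    d = len(theta)
    grad = [0.0] * d
    for _ in range(n_queries):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]  # random direction
        plus = [t + mu * ui for t, ui in zip(theta, u)]
        minus = [t - mu * ui for t, ui in zip(theta, u)]
        # Finite-difference of the loss along u, averaged over directions.
        scale = (loss(plus) - loss(minus)) / (2.0 * mu * n_queries)
        for i in range(d):
            grad[i] += scale * u[i]
    return grad

# Sanity check on a quadratic, where the true gradient is 2 * theta.
quadratic = lambda th: sum(t * t for t in th)
g = zoo_grad(quadratic, [1.0, -2.0])
```

Since only loss evaluations are needed, such an estimator respects the black-box constraint on the deployed model, which is the motivation the rebuttal gives for parameter perturbation.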
Summary: In this work, the authors aim to adapt unlabelled test data to a deployed model without access to its parameters and inner structure during the testing process. Specifically, the authors utilize a data adaptor during testing to map test data into the deployed model, whose gradients are estimated via ZOO. Experiments are conducted on CIFAR-10C and CIFAR-100C to verify the effectiveness of the proposed method. Strengths: 1. This paper is well-written and easy to follow. 2. The code is available with the submission, which improves the reproducibility of the paper. Weaknesses: 1. Experiments are not convincing. The authors only use CIFAR-10C and CIFAR-100C to verify the effectiveness of their algorithm. I suggest that more datasets with various types of distribution shift be included, such as Office-31, Office-Home, PACS, etc. 2. The necessity of ZOO is not clear. I can understand that the parameters of the deployed model are inaccessible, but why is the data adaptor you generate during testing still a black box? If the parameters of the data adaptor are known, why not simply fix the parameters of the deployed model as constants and use FOO to calculate the gradients of the data adaptor? 3. More baselines should be considered. There already exist some TTA algorithms that do not require access to the parameters of the deployed model, such as T3A [1]. [1] Iwasawa Y, Matsuo Y. Test-time classifier adjustment module for model-agnostic domain generalization. Advances in Neural Information Processing Systems, 2021, 34: 2427-2440. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See the Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors provide the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** Experiments are not convincing. The authors only use CIFAR-10C and CIFAR-100C to verify the effectiveness of their algorithm. I suggest that more datasets with various types of distribution shift be included, such as Office-31, Office-Home, PACS, etc. **A1:** Thanks for your instructive suggestion; we further conduct experiments on ImageNet-C and challenging Office-Home domain adaptation tasks to illustrate the efficacy of our proposed SODA framework. The results are shown in the attached file in the general response. > **Q2:** The necessity of ZOO is not clear. I can understand that the parameters of the deployed model are inaccessible, but why is the data adaptor you generate during testing still a black box? If the parameters of the data adaptor are known, why not simply fix the parameters of the deployed model as constants and use FOO to calculate the gradients of the data adaptor? **A2:** Sorry about the confusion. In our work, the data adaptor is not a black box, and its parameters can be accessed and modified. In our setting, the parameters of the deployed model are hidden, so **backward propagation through the deployed model is not allowed**, making gradient computation infeasible. Hence, zeroth-order optimization is used to circumvent this problem and estimate the gradients w.r.t. the parameters of the data adaptor for its training. > **Q3:** More baselines should be considered. There already exist some TTA algorithms that do not require access to the parameters of the deployed model, such as T3A [1]. **A3:** Thanks for your constructive suggestion. We also expect more works addressing the same setting as we do; however, most existing works that do not modify the model parameters still require access to the extracted features. For example, T3A [1] splits the pre-trained model into a feature extractor and a classifier, and uses the features extracted by the feature extractor to form the support set. 
Compared to their setting, ours forbids access to the features, which is stricter and more practical due to intellectual property protection, misuse prevention, privacy concerns in healthcare and finance, etc. > References: > > [1] Iwasawa Y, Matsuo Y. Test-time classifier adjustment module for model-agnostic domain generalization. Advances in Neural Information Processing Systems, 2021, 34: 2427-2440. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you so much for carefully considering the comments in my review. My concerns have been addressed. I also notice that the authors provide a theoretical analysis. Therefore, I have raised my score. Best regards, Reviewer xmoq --- Reply to Comment 1.1.1: Title: Thanks to Reviewer xmoq Comment: Dear Reviewer xmoq, We are glad to hear that our response has addressed your concerns. Thanks for raising your score! Best regards, Authors of #4388
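The zeroth-order gradient estimation described in A2 can be sketched as follows — a minimal, self-contained illustration of the standard two-point random-direction estimator from the ZOO literature, not the authors' exact implementation (function and parameter names are ours):

```python
import numpy as np

def zoo_gradient(loss_fn, theta, mu=1e-3, n_queries=20, rng=None):
    """Estimate d(loss)/d(theta) using forward queries only.

    `loss_fn` is treated as a black box: in the TTA setting it would wrap
    the (inaccessible) deployed model plus the data adaptor, so no
    backpropagation through the deployed model is ever needed.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(n_queries):
        u = rng.standard_normal(theta.shape)  # random probe direction
        # Two-point finite difference along u approximates the
        # directional derivative of the loss at theta.
        delta = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
        grad += delta * u
    return grad / n_queries

# Sanity check on a quadratic loss whose true gradient is 2 * theta:
theta = np.array([1.0, -2.0, 0.5])
g = zoo_gradient(lambda t: np.sum(t ** 2), theta, n_queries=20000, rng=0)
```

Averaging over many probe directions reduces the estimator's variance, which is why the number of model queries trades off against gradient quality.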
Summary: This paper proposes the use of zeroth-order optimization (ZOO) for test-time adaptation (TTA) to ease several practical issues regarding accessing model parameters during TTA. Since ZOO with pseudo-labels, which is a standard method in TTA, might cause unreliable gradients, the paper proposes a sample selection method using the confidence of the prediction and class balance, and uses only reliable samples to compute the pseudo-label loss. The unreliable samples are used to compute another unsupervised loss to facilitate better adaptation. They show its effectiveness on CIFAR10-C and CIFAR100-C. No theory is provided. Strengths: 1. The paper is generally well written and easy to follow. 2. The usage of zeroth-order optimization in TTA is well motivated and an interesting new problem. 3. The proposed sample selection approach based on confidence and class balance seems not revolutionary but sensible. Weaknesses: 1. The experiments are limited to CIFAR10-C and CIFAR100-C. As usual, I recommend adding experiments on ImageNet-C and some domain adaptation datasets. 2. The technical novelty is not high. Besides, I found that the effectiveness of the selection method proposed in this paper is not fully validated. For example, the paper does not provide an ablation on the sensitivity to the threshold parameter. I'm also curious why we should not compute the information maximization loss for reliable samples. Besides, the necessity of the information maximization loss is not experimentally validated. 3. The selection of the information maximization loss is not well described. Why did you choose the specific loss function? 4. Several details are unclear to me. Including, - What is the difference between SODA-R and SODA-FO? - I'm confused by Table 5, since it says that SODA-O is a variant of SODA under *online* settings but seems to repeat the optimization for multiple epochs. Do you repeat the optimization after you reach the whole test dataset? 
In that case, I have to say that it is not the usual online setup. 5. No theoretical results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** The experiments are limited to CIFAR10-C and CIFAR100-C. As usual, I recommend adding experiments on ImageNet-C and some domain adaptation datasets. **A1:** Thanks for your constructive suggestions. More experimental results on ImageNet-C and challenging Office-Home tasks are shown in the attached file in the general response. > **Q2:** The technical novelty is not high. **A2:** We would like to highlight that our main novelty and contributions are threefold: - **ZOO for test-time data adaptation**: We tackle a challenging and realistic setting where the parameters of the deployed model are inaccessible and unmodifiable. Zeroth-order optimization and data adaptation are proposed to solve this problem. - **Label noise in ZOO**: We analyze the effect of label noise in ZOO and point out that noisy pseudo-labels can cause biased gradient estimation in ZOO, leading to limited performance of the test-time data adaptor. - **New methods for robust test-time data adaptation**: Based on our analysis, we propose SODA to robustly train the test-time data adaptor. SODA separates the pseudo-labels into reliable and unreliable sets and performs semi-supervised learning using the cross-entropy loss and mutual information maximization. > **Q3:** Besides, I found that the effectiveness of the selection method proposed in this paper is not fully validated. For example, the paper does not provide an ablation on the sensitivity to the threshold parameter. **A3:** The hyper-parameter analysis is presented in Appendix C.3. In response to your kind reminder, we will highlight it in the main paper. > **Q4:** I'm also curious why we should not compute the information maximization loss for reliable samples. **A4:** Information maximization works by making the model prediction more certain while keeping diversity in the global structure. 
The predictions of data samples with reliable labels already have high confidence, so information maximization is not needed for those samples. > **Q5:** Besides, the necessity of the information maximization loss is not experimentally validated. **A5:** The baseline DA-PL in our main experiments shows the effectiveness of information maximization. DA-PL only uses the pseudo-labels to train the data adaptor and makes trivial improvements. Compared to DA-PL, our proposed SODA improves the deployed model to a large extent by separating the dataset into a reliable set trained with supervision from reliable pseudo-labels and an unreliable set trained without supervision via information maximization. > **Q6:** The selection of the information maximization loss is not well described. Why did you choose the specific loss function? **A6:** Thanks for your helpful comments. Because of the high error rate, data samples with unreliable pseudo-labels may be misclassified into classes with large numbers of samples, making them hard to separate. Following previous works [1][2], one useful way to circumvent this problem is to encourage diversity among the predictions of the data samples. Information maximization is a widely used unsupervised loss that can encourage both global diversity and local certainty of model predictions. In response to your kind reminder, we will add this explanation to our revised paper. > **Q7:** Several details are unclear to me. Including, what is the difference between SODA-R and SODA-FO? **A7:** SODA-R and SODA-FO are both relaxed baselines assuming that gradient computation through the deployed model is allowed while the parameters of the deployed model are still not modifiable. They both use first-order optimization to compute gradients for the training of the data adaptor. As a comparison baseline, SODA-FO keeps everything the same as SODA except the usage of FOO, to show the effect of ZOO in the training of the data adaptor. 
Based on SODA-FO, SODA-R adopts a deeper network architecture for the data adaptor, the Adam optimizer, perturbation regularization, and a dropout strategy to show the better results that can be achieved by SODA under relaxed settings. A more detailed discussion is presented in Appendix C.1. Thanks for your comments; we will put a more detailed explanation in our revised paper. > **Q8:** I'm confused by Table 5, since it says that SODA-O is a variant of SODA under online settings but seems to repeat the optimization for multiple epochs. Do you repeat the optimization after you reach the whole test dataset? In that case, I have to say that it is not the usual online setup. **A8:** Sorry about the confusion. The optimization in SODA-O is not repeated after reaching the entire test dataset but only repeats for the current test data batch and the cached queue. During the adaptation of the current test data batch, the previous data batches are no longer available except for those saved in the queue. After reaching the entire test dataset, the whole adaptation process ends. Thanks for your comments; we will put a more detailed explanation in our revised paper. > **Q9:** No theoretical results. **A9:** Thanks for your instructive comments. We provide a theoretical analysis of pseudo-label-robust training and ZOO in the general response. > References: > > [1] Jian Liang, et al. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. ICML, 2020. > > [2] Jian Liang, et al. DINE: Domain adaptation from single and multiple black-box predictors. CVPR, 2022. --- Rebuttal 2: Title: Please check if our response clarified your questions Comment: Dear Reviewer SVqY, As the discussion period ends soon, we just wanted to check whether our response clarified your questions. Thanks again for your constructive feedback. Best regards, Authors of #4388 --- Rebuttal Comment 2.1: Comment: Thank you for providing detailed responses. 
I think the response resolves my initial concerns to a good extent. I am therefore increasing my score and am slightly leaning toward acceptance of the paper. --- Reply to Comment 2.1.1: Title: Thanks to Reviewer SVqY Comment: Dear Reviewer SVqY, We are glad to hear that our responses have resolved your concerns. Thank you for increasing your score! Best regards, Authors of #4388
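The information-maximization objective discussed in A4–A6 — per-sample certainty plus global diversity of the batch prediction, in the spirit of the SHOT-style losses the authors cite — can be sketched as follows (a simplified illustration, not the authors' exact loss; names are ours):

```python
import numpy as np

def info_max_loss(probs, eps=1e-12):
    """Information-maximization loss over a batch of softmax outputs (n, K).

    Minimizing it makes each prediction confident (low conditional
    entropy) while keeping the batch-average prediction spread over
    classes (high marginal entropy), which discourages collapse of
    all unreliable samples onto a single dominant class.
    """
    cond_ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    marginal = probs.mean(axis=0)
    marg_ent = -np.sum(marginal * np.log(marginal + eps))
    return cond_ent - marg_ent  # lower = certain yet diverse

# Confident-and-diverse predictions score lower than a collapsed batch:
diverse = np.array([[0.98, 0.01, 0.01],
                    [0.01, 0.98, 0.01],
                    [0.01, 0.01, 0.98]])
collapsed = np.array([[0.98, 0.01, 0.01]] * 3)
```

For the collapsed batch the conditional and marginal entropies coincide, so the loss is near zero, while the diverse batch earns the full negative diversity bonus.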
Summary: To better adapt models to test distributions without changing model parameters, this paper adopts the strategy of training a data adaptor that can adjust the test data to fit the deployed models. To avoid the potential corruption of data features caused by the data adaptor, the proposed method treats the test-time adaptation process as a semi-supervised learning process. Specifically, the test data points are split into two subsets: a high-confidence set on which regular cross-entropy minimization is performed and a low-confidence set (treated as unlabeled) on which mutual information maximization is performed. Strengths: 1. This paper studies a realistic problem in test-time adaptation, i.e., the unreliable nature of the pseudo-labels assigned to the test data. 2. This paper addresses the low-quality issue of pseudo-labels by transforming the adaptation process into a semi-supervised learning process, in which the data adaptor model is less impacted by mislabeled data points. 3. The proposed method uses the ZOO framework to estimate the gradients of the parameters and can efficiently solve the problem with a few queries. Weaknesses: 1. The Pseudo-Label-Robust Data Adaptation module is the key contribution of this paper, but the design of this part is too simple. The problem here is actually a noisy label learning problem, and treating it as semi-supervised learning is a common strategy. 2. The reliable pseudo-label selection process in Subsection 3.3 utilizes a fixed threshold or ratio to select data points, which may not be robust in real-world settings. 3. Experiments on large-scale datasets are not presented in the paper. Moreover, parameter analyses in terms of $\tau$ and $\rho$ are needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: The Pseudo-Label-Robust Data Adaptation module is the key contribution of this paper, but the design of this part is too simple. The problem here is actually a noisy label learning problem, and treating it as semi-supervised learning is a common strategy. **A1**: We agree with your point that Pseudo-Label-Robust Data Adaptation is one of the key contributions, but we would like to highlight that the **challenges** are threefold. - **Inaccessible model parameters**. The parameters of target models are inaccessible; thus, existing model adaptation methods may fail to promote the performance of target models. Accordingly, we propose to employ a data adaptor for target models. - **Infeasible gradients**. Calculating gradients through target models is no longer possible in this scenario since model parameters are hidden. Therefore, we propose to employ zeroth-order optimization (ZOO) to approximate gradients for the update of the data adaptor. - **Noisy labels**. The labels of test samples are unknown, leading to biased loss values used in ZOO. To this end, we employ the commonly used pseudo-label strategy to perform robust data adaptation. Moreover, inspired by your constructive comments, we further give a **theoretical analysis** of the mentioned pseudo-label strategy under test-time adaptation scenarios, as shown in the general response. > **Q2:** The reliable pseudo-label selection process in Subsection 3.3 utilizes a fixed threshold or ratio to select data points, which may not be robust in real-world settings. **A2:** Thanks for your insightful comments. We agree with the point that a fixed threshold and ratio may be suboptimal. As discussed in Appendix C.3, a good combination of threshold and ratio may be able to achieve better results. Adaptive noise ratio estimation, e.g., with a Gaussian Mixture Model (GMM) [1], might help make SODA more robust. Meanwhile, we can see that SODA still outperforms the baselines even with a fixed threshold and ratio. 
> **Q3:** Experiments on large-scale datasets are not presented in the paper. **A3:** Thanks for your constructive suggestion, which motivates us to verify the efficacy of SODA on a large-scale dataset, i.e., ImageNet-C. The results are shown in the general response. We will add these results and the discussion to our revised paper. > **Q4:** Moreover, parameter analyses in terms of $\tau$ and $\rho$ are needed. **A4:** The hyper-parameter analyses are presented in Appendix C.3. In response to your kind reminder, we will highlight them in the main paper. > References: > > [1] E. Arazo, et al. Unsupervised label noise modeling and loss correction. ICML, 2019. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: Your responses have addressed some of my previous concerns. Nevertheless, I still retain a concern about the hyperparameter study. Presently, the hyperparameter analysis is confined to CIFAR-10. However, it would be valuable to extend this study to diverse datasets to figure out whether hyperparameters, such as the threshold $\tau$, exhibit substantial variation across different datasets. Consequently, I will keep my current evaluation score.
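One plausible reading of the fixed threshold-and-ratio selection discussed in Q2/A2 — keep at most a $(1-\rho)$ fraction of the most confident samples, and among those only the ones whose confidence exceeds $\tau$ — can be sketched as follows (a hypothetical rule for illustration, not the paper's exact Subsection 3.3 procedure):

```python
import numpy as np

def select_reliable(probs, tau=0.9, rho=0.5):
    """Split a batch of softmax outputs (n, K) into reliable/unreliable indices.

    Hypothetical rule: `rho` approximates the pseudo-label noise ratio,
    so at most a (1 - rho) fraction of the batch (the most confident
    samples) can be reliable, and each kept sample must additionally
    have confidence >= tau.
    """
    conf = probs.max(axis=1)
    order = np.argsort(-conf)                      # most confident first
    keep = int(np.floor((1 - rho) * len(conf)))    # ratio cap
    candidates = order[:keep]
    reliable = candidates[conf[candidates] >= tau]  # threshold filter
    unreliable = np.setdiff1d(np.arange(len(conf)), reliable)
    return reliable, unreliable

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.99, 0.01],
                  [0.55, 0.45]])
reliable, unreliable = select_reliable(probs, tau=0.9, rho=0.5)
```

The reliable indices would then receive the supervised cross-entropy loss and the unreliable ones the information-maximization loss, matching the semi-supervised split described in the rebuttal.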
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers for taking the time and effort to review our paper and provide valuable feedback. We would like to thank the reviewers for their recognition of our work: 1) our problem is **realistic** (#1NX6), **well-motivated**, and **new and interesting** (#SVqY and #H74J); 2) our method is **efficient** (#1NX6), **effective** (#SVqY, #xmoq, #H74J), **simple** (#H74J) and **sensible** (#SVqY); 3) our paper is **well-written** and **easy to follow** (#SVqY and #xmoq). Besides the response to each reviewer, we would like to provide more **experimental results** on ImageNet-C and challenging Office-Home domain adaptation tasks, as constructively suggested by reviewers, in the attached PDF file. Furthermore, we would like to provide more **theoretical analysis** here to show that our proposed pseudo-label-robust training strategy can tighten the upper bound of the expected gradient estimation error in zeroth-order optimization. > > Given a data adaptor $\mathbf{G}$ with parameter $\boldsymbol{\theta}$, a deployed model $\mathbf{M}$ and a test dataset $\mathbf{X} = \\{\mathbf{x}_1,...,\mathbf{x}_n\\}$, denote the adapted data sample as ${\mathbf{x}_i^{\boldsymbol{\theta}}}$, the true label of $\mathbf{x}_i$ as $\mathbf{y}_i$, and $\hat{\mathbf{p}}_i^{\boldsymbol{\theta}}=\mathbf{M}\circ \mathbf{G}(\mathbf{x}_i;\boldsymbol{\theta})$. > > According to [1], minimizing the cross-entropy loss $\mathcal{L}_{\rm ce}(\mathbf{y}_i, \hat{\mathbf{p}}_i^{\boldsymbol{\theta}})$ is equivalent to maximizing the mutual information $\mathcal{L} _{\rm im}(\mathbf{x}_i^{\boldsymbol{\theta}})$. 
> > From the derivation in Appendix A, with pseudo-label $\hat{\mathbf{y}}_i = \mathbf{y}_i + \boldsymbol{\sigma}_i$, the KL divergence loss at test data point $\mathbf{x}_i$ is: > $$\mathcal{L}_i = -H(\mathbf{y}_i+\boldsymbol{\sigma}_i)+\mathcal{L} _{\rm ce}(\mathbf{y}_i, \hat{\mathbf{p}}_i^{\boldsymbol{\theta}}) - \boldsymbol{\sigma}_i \log \hat{\mathbf{p}}_i^{\boldsymbol{\theta}}.$$ > > Denoting $h(\mathbf{x}_i) = -\boldsymbol{\sigma}_i \log \hat{\mathbf{p}}_i^{\boldsymbol{\theta}}$, the gradient of the KL divergence loss is: > $$\nabla _{\boldsymbol{\theta}}\mathcal{L}_i = \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} + \nabla _{\boldsymbol{\theta}}h.$$ > > Then, in the gradient estimation of ZOO, the estimated gradient of the KL divergence loss is: > $$\widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _i} = \widehat{\nabla} _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} + \widehat{\nabla} _{\boldsymbol{\theta}}h.$$ > > Hence, before applying pseudo-label-robust data adaptation, the upper bound of the expected gradient estimation error is: > $$\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _i} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _i \parallel_2] \leq \mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _{\rm ce}} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} \parallel_2] + \mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}h - \nabla _{\boldsymbol{\theta}}h \parallel_2].$$ > > In SODA, by separating $\mathbf{X}$ into a reliable set $\mathbf{X}_r$ learned via the cross-entropy loss with pseudo-labels and an unreliable set $\mathbf{X}_u$ learned via the mutual information loss, the expected gradient estimation error on the whole dataset is: $$ \begin{aligned} & \mathbb{E} _{\mathbf{X}}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _i} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _i \parallel _2]] \\\\ & = \mathbb{E} _{\mathbf{X} _r}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _i} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _i \parallel _2]] + \mathbb{E} _{\mathbf{X} _u}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\check{\mathcal{L}} _i} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _i \parallel _2]] \\\\ & = \mathbb{E} _{\mathbf{X} _r}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} + \widehat{\nabla} _{\boldsymbol{\theta}}h - \nabla _{\boldsymbol{\theta}}h \parallel _2]] + \mathbb{E} _{\mathbf{X} _u}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\mathcal{L} _{\rm im}} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm im} \parallel _2]] \\\\ & \leq \mathbb{E} _{\mathbf{X} _r}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} \parallel _2] + \mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}h - \nabla _{\boldsymbol{\theta}}h \parallel _2]] + \mathbb{E} _{\mathbf{X} _u}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}{\mathcal{L} _{\rm ce}} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} \parallel _2]] \\\\ & \leq \mathbb{E} _{\mathbf{X}}[\mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} - \nabla _{\boldsymbol{\theta}}\mathcal{L} _{\rm ce} \parallel _2] + \mathbb{E}[\parallel \widehat{\nabla} _{\boldsymbol{\theta}}h - \nabla _{\boldsymbol{\theta}}h \parallel _2]]. \end{aligned}$$ > > Thus, the upper bound of the gradient estimation error is tightened after applying pseudo-label-robust data adaptation. > Reference: > > [1] Boudiaf, Malik, et al. A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses. ECCV, 2020. Pdf: /pdf/6f346bf4f7ee71018573b031ea831ad5f694c851.pdf
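The first identity in the derivation above — the KL loss splitting into an entropy term, the clean cross-entropy $\mathcal{L}_{\rm ce}(\mathbf{y}_i, \hat{\mathbf{p}}_i^{\boldsymbol{\theta}})$, and the noise term $h$ — can be checked numerically. A quick sanity script with made-up values (a soft pseudo-label is used to avoid $0 \log 0$); this is only an illustration, not part of the authors' proof:

```python
import numpy as np

# Noisy pseudo-label y_hat = y + sigma, deployed-model prediction p_hat.
y = np.array([1.0, 0.0, 0.0])        # true (one-hot) label
y_hat = np.array([0.7, 0.2, 0.1])    # soft pseudo-label
sigma = y_hat - y                     # label noise
p_hat = np.array([0.6, 0.3, 0.1])    # model prediction

# Left-hand side: KL(y_hat || p_hat).
kl = np.sum(y_hat * np.log(y_hat / p_hat))

# Right-hand side: -H(y_hat) + L_ce(y, p_hat) + h, with h = -sigma . log p_hat.
neg_entropy = np.sum(y_hat * np.log(y_hat))   # -H(y + sigma)
ce_clean = -np.sum(y * np.log(p_hat))         # L_ce(y, p_hat)
noise_term = -np.sum(sigma * np.log(p_hat))   # h(x_i)
decomposed = neg_entropy + ce_clean + noise_term
```

The two quantities agree exactly, since the cross-entropy is linear in the label: $-\sum (\mathbf{y}+\boldsymbol{\sigma})\log\hat{\mathbf{p}} = \mathcal{L}_{\rm ce}(\mathbf{y},\hat{\mathbf{p}}) - \boldsymbol{\sigma}\log\hat{\mathbf{p}}$.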
NeurIPS_2023_submissions_huggingface
2023
Do Not Marginalize Mechanisms, Rather Consolidate!
Accept (poster)
Summary: The paper presents a framework for causal reasoning that supports the simplification of large structural causal models (SCMs). One key operation that supports this simplification is consolidation of a SCM such that only a subset of endogenous variables are explicitly modelled, but in such a way that all possible interventions are still supported. Another key operation is the partitioning of an SCM into sub-models, in such a way that endogenous variables from one sub-SCM act as exogenous variables in another. Potential reductions in complexity are then discussed by the use of these operations and associated constraints. For instance, replacing equations within SCMs with computationally simpler expressions, and dropping equations which do not relate to variables of interest. Throughout there is a focus on retaining information about the effects of interventions. The paper concludes with two examples: one on modelling tool wear on a milling machine and another, more complex example, exploring planning policies for a simple platformer game. Strengths: * The paper very clearly presents its contribution and relates this well to existing work, justifying the relevance of the contribution to the field. * The paper builds its arguments formally, and with intuition, and presents meaningful relevant results and constraints. * The examples illustrate well the benefits of the proposed framework. * The contributions appear meaningful to me and likely to be of relevance to other researchers, particularly those concerned with large scale causal modelling. Weaknesses: * At times the arguments are a little vague or the explanations incomplete. * Although mostly clear, the formal notation sometimes leaves a little to be desired. * There are a few places where the authors appear to make errors or omissions in their explanations. Issues with understanding: 1. In definition 1, the set of possible interventions is a little unclear. 
There appears to be 1 possible intervention, $I_i$ per endogenous variable, $X_i$. Or can there be multiple potential, but mutually exclusive, interventions per endogenous variable? What exactly is $I_i$? 2. On line 94 $f_i(\textbf{x}, x_0)$ is used to indicate the structural equation for variable $X_i$ under the intervention that $X_j$ is set to value $x_0$, but this seems a little under-defined to me. 3. On line 98, the authors state that $\mathcal{M}$ entails infinitely many intervened distributions, but this seems to conflict with earlier notation (see 1.). 4. Definition 2 could do with an explanation of $P^{\textbf{I}}_{\textbf{E}}$. I am assuming this means the distribution of target variables in $\textbf{E}$ under some intervention $\textbf{I}$ but I can't see this stated anywhere. 5. The notation $\rho(\textbf{U},\textbf{I})$ first introduced in Definition 2 does not refer to the subset of variables $\textbf{E}$ to which it refers. This would be good practice anyway, but becomes more problematic when partially consolidated SCMs are considered (Def 5) as the variables of interest $\textbf{E}$ are augmented with additional variables that act as exogenous variables in other sub-SCMs. 6. The caption for Figure 2 (right) states that the dotted line indicates explicit computation for $X_2$ but it isn't clear what is meant by this. 7. In section 3, in a number of places (lines 159, 165, 187), there appears to be repeated errors in the notation, e.g. $\textbf{V} \in \textbf{A}$. I think that $\textbf{V}$ is the complete set of endogenous elements while $\textbf{A}$ is a subset of $\textbf{V}$ 8. In lines 175-183, the partitioning of an SCM appears to require that exogenous variables $\textbf{U}_i$ of a sub SCM must be endogenous variables of another sub-SCM in the same partitioned SCM. But could they be truly exogenous variables of the whole system? 
Also, there appears to be notational irregularities in this paragraph relating to what is a sub-SCM and what is a partitioned SCM. 9. Things get a bit messy around definition 5 with respect to $\textbf{E}$ and $\textbf{E}'$. The distinction between these two sets of variables could be clearer. For instance, if I am considering the partially consolidated SCM $\mathcal{M}_{\mathcal{A},\textbf{E}'}$, then how do I know what the set $\textbf{E}$ is that is used to define $\textbf{E}'$? 10. In section 4, there is a chain of inference on lines 234-235 that is difficult to follow (what is the scope of the universal and existential quantifiers?) and some entity $D$ appears without being defined. 11. I got a little lost in section 4.1. In particular, the discussion of conditional branching and stacking was a little vague. ## Post rebuttal After reading the rebuttal and individual responses to the above points, I am raising my recommendation to "accept". Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My questions have mostly been articulated in the **Weaknesses** field. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors discuss limitations and the possible relaxations of these effectively. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you, reviewer ZzBw, for the detailed comments in your review, which have helped us improve our paper to its new version. In the following we go over each of the highlighted points one by one. * *Regarding the set of possible interventions:* Thank you for pointing this out, as it helped us improve the paper and avoid future confusion for other readers. In our paper we refer to an *intervention* $I$ as a specific instantiation of the do operator on a variable, e.g. $do(X_0 = 5)$, while sets of multiple interventions $\textbf{I}$ (note the different formatting) can be applied over the whole SCM. To limit notational clutter, we assumed that the set of allowed interventions is countable, so that we get something like $I_0 = do(X_0=0); \dots; I_5 = do(X_0=5); I_6 = do(X_1=0); \dots$ for instance. While reviewing your comment we found that our formalism might break down for SCMs containing uncountable (e.g. real-valued) variables (and did not restrict the actual sets to only contain one intervention per variable). To fix our mistake (and keep notational brevity), we have switched to writing $I_{i,v}$, indicating the intervention $do(X_i = v)$. In consequence, Def. 1, bullet point 4, is rewritten as: "$\mathcal{I} \subseteq \bigcup_{i \in [1\dots N]} \bigcup_{v \in {\mathcal{X}\_i}}$ $\{I_{i,v}\}$ such that $\forall \mathbf{I} \in \mathcal{I}. \forall i \in [1\dots N].(\exists! v\in\mathcal{X}\_i. I\_{i,v} \in \mathbf{I}) \lor (\lnot \exists v \in \mathcal{X}\_i. I\_{i,v} \in \mathbf{I})$ and $\mathbf{J} \subset \mathbf{I} \in \mathcal{I} \rightarrow \mathbf{J} \in \mathcal{I}$. $\mathcal{I}$ is the set of perfect interventions under consideration. A perfect intervention $\text{do}(V_i = v_i)$ replaces the unintervened $f_i$ by the constant assignment $V_i := v_i$." 
* *Regarding the "explicit computation" in Fig.2:* With the figure we want to express that the value of $X_2$, as an element in $\textbf{E}'$, is computed by $\rho_{\mathbf{E}'}$ (and there would be no way to depict the variable in the figure otherwise). In contrast to `normal' variables one can not directly intervene on this variable by simply cutting the edge to $\rho_{\mathbf{E}'}$ via an intervention. As stated in the caption, $X_2$ is the output of $\rho_{\mathbf{E}'}$ and not a normal variable. With 'explicit' we indicate that its value is still visible to the user. In contrast, $\rho_{\mathbf{E}'}$ might also compute the value of $X_1$ internally. However, this depends on the compression inside $\rho_{\mathbf{E}'}$ and $X_1$ is not visible to the user. Thanks for the pointer, we've added a brief statement on this to the new paper version. * *Regarding (7). Notational error on $\mathbf{V},\mathbf{A}$*: Thanks for spotting that! Correct, we've fixed this. * *Regarding (8): Truly exogenous variables* Indeed, they could also be truly exogenous variables. We've added a comment on this. * Regarding (8). Sub-SCM and partitioned SCM: As defined in Definition 4 a partitioned SCM $\mathcal{M}\_{\mathcal{A}}$ consist of multiple sub SCM $\mathcal{M}\_{\mathbf{A}_i}$. For better readability we discarded the index $i$ wherever it was undefined and wrote $\mathcal{M}\_{\mathbf{A}}$ instead to mean any (unspecific) $\mathbf{A} \in \mathcal{A}$. We've added a clarifying statement. * *Regarding (9). Clarification on $\mathbf{E},\mathbf{E}'$*: To consider either $\mathbf{E}$ or $\mathbf{E}'$ actually depends on the standpoint of the user (of the consolidation operation). From a computational perspective, $\mathbf{E}'$ is important as it contains all variables that need to be computed by $\rho$. While $\mathbf{E}$ rather captures important aspects of the SCM to the user i.e., variables of interest. 
One could argue, however, that $\mathbf{E}'$ can be deduced from $\mathbf{E}$, but not the other way around. We therefore chose to refer to sub-SCMs with $\mathcal{M}\_{\mathbf{E}'}$ (and in the same breath write $\rho_{\mathbf{E}'}$) but use $\mathcal{M}\_{\mathcal{A},\mathbf{E}}$ to retain the initial set $\mathbf{E}$ in the overall notation. Thanks for pointing out this nuanced discussion, we've added a short paragraph. * *Regarding (11): General considerations on compressibility:* Computing minimal representations is generally not possible (as outlined in 'General compression of equation systems' of Sec. 4). Nonetheless, we intended to give a more involved perspective on some of the basic structures (chains, forks and colliders) that appear in the SCM's implied graph structure to allow for some local optimizations using our approach (even if they do not allow one to arrive at the true minimal representation $f^\star_i$). Ultimately, there is no way of measuring the compressibility of equations by only considering their connecting graph structure. However, even considering local substructures of graphs might improve compression. We have added this discussion to Section 4.1. Thanks for raising this. In conjunction with other reviewers' comments, we furthermore present a concrete example of consolidating a collider and a chain, which will be added to the final paper. (Consider consolidation of $\mathbf{A}_1$ and $\mathbf{A}_2$ respectively in the provided PDF. Actual steps are provided in the answer to reviewer tWPL.) **Disclaimer:** our actual answer covered all and thus more of your points. This is due to the 6000-character limit by OpenReview for NeurIPS this year. We therefore chose to delete points we deemed you would find less important than what we kept (we incorporated all of your comments into the paper). We can provide them during the author-reviewer discussions.
We would like to once again sincerely thank you for thoroughly checking and commenting on our work! It has greatly improved the overall soundness and quality of our paper in its new version, and we look forward to possible further discussions with you. Kind regards, your authors. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. I think I can understand most of these now. I wanted to make the following points: * Regarding the set of possible interventions: I think I see. If I do, then there can be 0 or 1 unique intervention values $v$ for each variable $X_i$ in some intervention set $\mathbf{I}$, but the set of possible interventions $\mathcal{I}$ includes all possible values for each variable. Is that right? The set notation you use elsewhere is maybe more compact/readable than the equation in your response. * Regarding the "explicit computation" in Fig. 2: I think I understand the explanation. Perhaps you could name the variable differently, e.g. $X_2'$, as it isn't the "same" variable, in the sense that it doesn't have the same properties as $X_2$. For instance, it may be the case that there is some original intervention $do(X_2=v)$, but this may not necessarily change the value of $X_2'$. Or am I misunderstanding? After reading the other reviews, the rebuttal and your responses to each of the reviewers, I am raising my assessment to Accept. --- Reply to Comment 1.1.1: Comment: Thank you for further engaging in the discussion, here are our comments to these two points: 1. Correct. The intervention set is a subset of the set of all possible interventions, where the set of all possible interventions is defined as the union over all intervenable variables and a union over each combination of values (from the respective domains of each of the variables) that these variables can take.
We have switched to the following set notation: $\mathcal{I} \subseteq \\{\\{I_{i,v\_i}\\}\_{i \subseteq \\{1\dots N\\}}\\}\_{\mathbf{v} \in {\pmb{\mathcal{X}}}}$ where $v_i$ is the $i$-th element of $\mathbf{v}$ and such that $\mathbf{J} \subset \mathbf{I} \in \mathcal{I} \rightarrow \mathbf{J} \in \mathcal{I}$. (Reminder: $\pmb{\mathcal{X}}$ is defined as the Cartesian product covering all variable domains $\mathcal{X}_i$.) Despite the rather minor technical drawback of generating some $\mathbf{I}$ 'multiple times', we appreciate the suggestion of a more compact formalization with reduced constraints. 2. We are sorry about the confusion regarding our answers. The consolidated $X_2$ will behave exactly the same (with and without interventions). This becomes clearer when looking only at the structural equations: $X_2$ is computed via $\rho_{\mathbf{E}'}(\mathbf{U}, \mathbf{I})$, which is required to be consistent with the original $\mathcal{M}$ (compare to Def. 2). However, when considering Figure 2 without any additional precautions, a potential reader may get the impression that there are now two possible places to intervene: (1) via the parameter $\mathbf{I}$, e.g. $\mathbf{I} = \\{ do(X_2 := v)\\}$, of $\rho_{\mathbf{E}'}(\mathbf{U}, \mathbf{I})$, as it computes the value of $X_2$, and (2) via a 'classical' intervention on the graph, $do(X_2 := v')$, cutting the edge between $X_2$ and $\rho_{\mathbf{E}'}$. This cannot be, as two, possibly different, intervention values for $X_2$ would lead to inconsistent behaviour. (Note that in the second case $\rho$ would not only compute an incorrect value for $X_2$, but also incorrectly compute values for the dependent $X_4$ and $X_5$.) For this reason - and while observing the value of $X_2$ - we are not allowed to manipulate the connection between $\rho_{\mathbf{E}'}$ and $X_2$ (= we do not allow interventions of case (2)). To indicate this restriction in the graph, we decided on the altered visual representation.
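The closure conditions discussed above can be made concrete with a minimal Python sketch. All variable names and (finite) domains below are invented for illustration; the sketch only demonstrates the two constraints: each intervention set fixes each variable to at most one value, and the family $\mathcal{I}$ is closed under taking subsets ($\mathbf{J} \subset \mathbf{I} \in \mathcal{I} \rightarrow \mathbf{J} \in \mathcal{I}$).

```python
from itertools import combinations

# Hypothetical finite domains for two toy variables.
domains = {"X1": [0, 1], "X2": [0, 1, 2]}

# All atomic interventions I_{i,v} = do(X_i = v), encoded as (variable, value).
atomic = [(var, val) for var, vals in domains.items() for val in vals]

def consistent(I):
    """An intervention set may fix each variable to at most one value."""
    vars_hit = [var for var, _ in I]
    return len(vars_hit) == len(set(vars_hit))

def downward_closed(script_I):
    """Every proper subset of an allowed intervention set must itself be allowed."""
    return all(
        frozenset(J) in script_I
        for I in script_I
        for r in range(len(I))
        for J in combinations(sorted(I), r)
    )

# The largest admissible script-I: all consistent subsets of atomic interventions.
script_I = {
    frozenset(I)
    for r in range(len(atomic) + 1)
    for I in combinations(atomic, r)
    if consistent(I)
}
```

With the domains above, `script_I` contains 12 intervention sets (3 choices for `X1`, counting "not intervened", times 4 for `X2`), including the empty set, and is downward closed by construction.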
Thanks again for asking this. We will make sure to add the main points of this discussion to the figure caption. Thank you for helping to improve our paper by providing all of this great feedback! Also thank you for raising your score even further post-rebuttal to a clear accept. Very much appreciated and kind regards, your authors
Summary: This work introduces a concept of consolidating causal mechanisms to transform large-scale structural causal models (SCMs) while preserving consistent interventional behaviour. The authors show that consolidation is powerful for simplifying SCMs, discuss the complexity, and give a perspective on generalization. Strengths: The authors build a solid framework of consolidating causal mechanisms. The notation is adequate. The idea of compressing the causal equations is interesting and straightforward. Weaknesses: 1. The authors should spend more effort describing the compression of causal equations clearly, as this is the key contribution of this work. 2. The authors did not propose a concrete algorithm to compress the causal equations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can the authors give a detailed derivation of how to extract a consolidated causal mechanism from a toy SCM? 2. What scale of SCMs can this method handle, and what is the complexity? Can the authors give some larger-scale examples? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: see the weakness and question parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you reviewer tWPL for detailing some important aspects that helped improve our paper. Here are some brief comments on what we've improved and the questions you've raised. * *Regarding the discussion of compressibility:* Compressing structural equations to a minimal representation is highly dependent on the equations under consideration and probably incomputable for most problems. As there is ultimately no way of measuring the compressibility of equations by only considering their connecting graph structure, we opted to provide a discussion of some of the information-theoretic implications that can be inferred in Section 4.1. Thanks for highlighting this point. We have modified Section 4.1 to discuss these limitations of our approach more critically. * *Regarding an algorithm for causal consolidation:* Nonetheless, we've added the requested algorithm to the paper. Please find the attached PDF in the global response, which shows the algorithm; a step-by-step worked-out demonstration on an additional toy SCM is given at the end of this reply. The presented concrete example consolidates a collider structure (consider consolidation of $\mathbf{A}_1$) and a chain (consider $\mathbf{A}_2$ of the example), matching the contents of Section 4.1. Thanks for raising both points; we consider especially the addition of the algorithm a strong contribution to our paper. * *Regarding larger SCMs:* In our example of the CoinRunner game, where we consolidate agent behavior previously captured in a large graph as depicted in Figure 4, we can see that consolidation also works in larger structures. We've now added a paragraph to further discuss the general limitations one faces when performing consolidation on larger graphs. Thanks for pointing this out. We also look forward to more discussions with you.
**An Example Application of the 'Consolidate' Algorithm** Consider the SCM provided in the general answer with its structural equations and resulting graph (endogenous variables are $B,C,D,E,F,G,H$ with only one exogenous variable $A$; each structural equation is highlighted on the r.h.s.; note that the subscript on $f_x$ denotes the variable to be determined, e.g. $B\leftarrow f_B(A)$). In the first step, the algorithm's user decides on a partition. Let's consider, for instance, the following partition, i.e., consolidation and allowed intervention sets: $\mathcal{A} = \\{\\{E,F,G\\}, \\{B,C\\}, \\{D,H\\}\\}; \mathbf{E} = \\{ C, F, H \\};$ $\mathcal{I} = \\{ \\{do(D = \text{true})\\}, \\{do(D = \text{false})\\}, \\{do(G = \text{false})\\} \\}$ To finalize our example, a step-by-step application of \textsc{Consolidate} for the cluster $\mathbf{A}_1 = \{E,F,G\}$: *Step 3:* $\mathbf{E}_1 \gets \\{E,F,G\\} \cap \\{C, F, H\\} = \\{ F\\}$ *Step 4:* $\mathbf{E}'_1 \gets \\{F\\} \cup (\text{pa}(\textbf{V} \setminus \\{ E,F,G \\}) \cap \\{ {E,F,G} \\}) = \\{F\\} \cup (\\{A,B,C,G\\} \cap \\{E,F,G\\}) = \\{F,G\\}$ *Step 5:* $\textbf{U}_{\mathbf{A}_1} \gets \text{pa}(\\{E,F,G\\}) \setminus \\{E,F,G\\} = \\{A,E,F\\} \setminus \\{E,F,G\\} = \{A\}$ *Step 6:* $\mathcal{I}_{\mathbf{A}_1} \gets \\{\\{ do(X_i = v) \in \mathbf{I}\ : X_i \in \\{ E,F,G\\}\\}: \mathbf{I} \in \mathcal{I}\\} = \\{\\{ do(G = \text{false}) \\}\\}$ *Step 7:* $\rho_{\mathbf{E}'\_1} \gets \\{ f_E(A) := A \text{ mod } 5 = 0; f_F(A) := A \text{ mod } 10 = 0; f_G(E,F) := E \land F \\}$ *Step 8:* $\rho^\star_{\mathbf{E}'_1} \gets \text{argmin} \mathcal{K}(\rho\_{\mathbf{E}'\_1} ) = \\{\rho_F(A) := A\text{ mod } 10=0; \rho_G(\rho_F, \mathbf{I}\_{\mathbf{A}_1}) := \rho_F \land (do(G = \text{false}) \notin \mathbf{I}\_{\mathbf{A}_1}) \\}$ *Step 9:* $\mathcal{M}\_{\mathbf{A}_1,\mathbf{E}} \gets (\\{F,G\\}, \\{F\\}, \rho^\star\_{\mathbf{E}'_1}, \\{\\{ do(G = \text{false}) \\}\\}, P_A)$ Note how computing $f_E$ is no longer
required. In a similar fashion, the equations in $\mathbf{A}_2$ resemble a chain that can be composed: $f_C \circ f_B$ (previously called '\textit{stacked}'; cf. Sec. 4.1). Since $|\text{Img}(f_B)|=2$, at least one of the three conditions of $f_C$ (since $f_C$ is a 3-case function) will be discarded, eventually yielding $\rho^{\star}\_{\mathbf{E}'_2}{\gets}\\{ \rho_C(A){:=} A \leq 5\\}$. As $D$ is not in $\mathbf{E}$ and not required by any other sub-SCM, it can be marginalized. $\mathbf{A}_3$ then reduces to $\rho^{\star}\_{\mathbf{E}'_3}{\gets}\\{ \rho_H(C,G) := C \lor G\\}$. --- Rebuttal Comment 1.1: Title: Looking Forward to Feedback on Our Response Comment: Dear Reviewer, We appreciate the time and effort that you have taken to provide us with the review. We would like to ask if the reviewer has any further concerns or is satisfied by our responses to the original review. We are looking forward to any further discussion with the reviewer and would like to thank the reviewer again for helping make our paper better. Regards, The Authors --- Rebuttal Comment 1.2: Comment: Thanks very much for the clarification. I acknowledge that I have read the rebuttal. I will keep my score.
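The set computations of Steps 3-5 in the worked example are plain set algebra and can be re-derived with a short Python sketch. The parent relation below is an assumption reconstructed from the sets listed in those steps (the actual graph lives in the authors' PDF); the sketch only checks that the listed intermediate sets follow from it.

```python
# Parent relation reconstructed from the worked example (assumption).
parents = {
    "B": {"A"}, "C": {"B"}, "D": {"C"},
    "E": {"A"}, "F": {"A"}, "G": {"E", "F"},
    "H": {"C", "G"},
}
V = set(parents)  # endogenous variables; "A" is exogenous

def pa(S):
    """Union of parents over a set of variables."""
    return set().union(*(parents.get(x, set()) for x in S))

A1 = {"E", "F", "G"}       # cluster to consolidate
E_vis = {"C", "F", "H"}    # user-chosen variables of interest, i.e. E

E1 = A1 & E_vis                    # Step 3: visible variables inside the cluster
E1_prime = E1 | (pa(V - A1) & A1)  # Step 4: plus variables needed outside A1
U_A1 = pa(A1) - A1                 # Step 5: exogenous inputs of the cluster
```

Running this reproduces the example's results: `E1 == {"F"}`, `E1_prime == {"F", "G"}`, and `U_A1 == {"A"}`.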
Summary: An operation of consolidation on SCMs is defined and its merits laid out in numerous examples. The operation amalgamates variables while preserving aspects of the causal structure. As opposed to the similar operation of marginalization that comes from probability theory and is well-known in causal abstraction, the consolidation operation is capable of representing interventions on those variables that were abstracted since the consolidated variable is of a special kind. It also accommodates compressions since other compatible functions can be assigned to the consolidated variable; this also opens the possibility of widening the domains of the original variables. Strengths: The problem is relatively well-situated within the literature, I believe the results to be sound, and the examples are good. Weaknesses: I am not convinced of the utility of the approach. What open problems does this contribute to solving? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The title seems inappropriate (especially imperative mood). Are you suggesting that consolidation is always superior to marginalization? - Line 4: "Thus, ... analyze." is not a full sentence - Lines 35--36: "Given that ... outcome." is not a full sentence. Awkward transition follows in "That is, ..." - Line 48: "intervention preserving" -> "intervention-preserving" - Line 69: it is inappropriate to cite do-calculus specifically here, rather, this is talking about the foundations of the SCM framework as a whole. - Line 116: "Section 4 four" -> "Section 4" - Is it really necessary to include the causal graph in the right of Figure 4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes, they have. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you reviewer CKKq for your detailed review and for appreciating our work and examples. Your comments have helped us improve the paper. We hope to answer all points you made in the following: * *Regarding the title and its implications of "superiority" of consolidation over marginalization:* The short answer is: yes. The more involved answer is: all authors discussed this thoroughly in advance and settled on the present title, which does not claim consolidation to "always [be] superior", given the normative implications of such a claim. Consolidation generalizes marginalization in the sense that marginalization can be modeled by consolidating with $\mathcal{I} = \emptyset$. Therefore, if we consider questions that involve causality / causal models, then consolidation necessarily needs to be considered, since it can actually handle interventions (all cases where $\mathcal{I} \neq \emptyset$). Conversely, if causal questions are not of concern, then marginalization can remain the standard procedure. On another note, we'd like to point out that marginalization does not require minimality of the resulting representation. Thank you for pointing this out; we can see how this is not immediately obvious from the paper, and we've added a corresponding statement to the refined version. * *Regarding the do-calculus citation in Line 69:* We fully agree and have corrected it in the paper. * *Regarding the necessity of the causal graph in the right of Figure 4:* Our intention here was to showcase an example of consolidation for larger graphs. We think that having the graph included in Figure 4 makes it clearer that even domain experts in a certain area (here the specific CoinRunner game) might struggle to reduce the graph to extract (humanly) interpretable / meaningful descriptions of the agent behavior. Consolidation proved to work for this example as efficiently as for some more elementary examples.
Also, the graph in Fig. 4 helps to emphasize how consolidation can reduce overall complexity. Nonetheless, to give all the details, we have added the full graph (fully readable) to the appendix, as mentioned in the paper. * *Regarding the utility of consolidation and open problems that might be tackled:* Thank you for pointing this out, since this is a key aspect of why we think consolidation is important. In the following, we give our perspective on this matter: as with marginalization, the operation of consolidation can help with simplifying causal models and therefore remove / abstract away variables that are irrelevant to the user's specific analysis. To include the effects of interventions (which is arguably the key operation in Pearl's causality framework and, more generally, in philosophical accounts of interventionist causality), the operations of marginalization and causal effect estimation would need to be repeated for every mutilated graph. With consolidation, however, we can retain the effects of interventions. As an example, recall the domino example from Figure 1 / Appendix: we do not have to apply every possible intervention and summarize their outcomes, but can simply read from the consolidated formula that every possible intervention will prevent the effect from arriving at the last domino. While the domino example specifically is not interesting for any particular analysis, it serves as a conceptual argument that highlights the implications for more important examples, such as more classical causal analysis or our second example with agents interacting with an environment (robotics scenarios). More broadly speaking, we intend for future work to follow up with researching applications of consolidation, after having set the foundations of the operation in this work.
Consolidation lays the foundation for justifying interventions on high-level objects (modeled by consolidated SCMs) where interventions can be applied externally but the inner structure of $\rho$ might not be known. Thanks again for highlighting this; we've added a discussion of this to the new paper version. Also thank you for pointing out various grammar and spelling mistakes. We have corrected all of them now. We look forward to any further discussion with you, thanks again. --- Rebuttal Comment 1.1: Comment: I have read the response and will maintain my current rating. --- Reply to Comment 1.1.1: Comment: Thank you for both confirming that you've read our response and that you remain with your positive score. Thanks again for helping to improve our paper through your feedback. If there is anything else, then please do let us know. Kind regards, your authors
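The domino argument from the rebuttal above can be sketched in a few lines of Python. Everything here is an illustrative assumption on our part: a chain of 5 dominoes, interventions modeled as holding a domino upright (do(X_i = false)), and made-up function names. The point is that the unrolled step-by-step simulation agrees with a single consolidated formula from which the effect of *any* intervention can be read off directly.

```python
def last_domino(first_falls, interventions, n=5):
    """Unrolled simulation of a chain X_0 -> X_1 -> ... -> X_n, where
    domino i falls iff its predecessor fell, unless do(X_i = false)
    holds it upright (interventions is a set of intervened indices)."""
    x = first_falls
    for i in range(1, n + 1):
        x = False if i in interventions else x
    return x

def last_domino_consolidated(first_falls, interventions, n=5):
    """Consolidated one-liner: the last domino falls iff the first one
    falls and no intermediate domino is intervened on."""
    return first_falls and not any(1 <= i <= n for i in interventions)

# The two views agree on every intervention set.
for interv in [set(), {1}, {3}, {5}, {2, 4}]:
    assert last_domino(True, interv) == last_domino_consolidated(True, interv)
```

As the rebuttal argues, one reads directly from the consolidated formula that any (upright-holding) intervention on the chain prevents the effect from reaching the last domino, without re-simulating each mutilated model.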
Rebuttal 1: Rebuttal: Thanks again to all reviewers for thoroughly checking and commenting on our work, helping us improve it to its current form. In addition to the individual comments, please see the attached PDF with the generally requested consolidation algorithm. The algorithm summarizes the construction of causal compositional variables and partitioned SCMs - as described in the paper - which we have added to the paper as a separate subsection right before the applied examples (Sec. 4.2). We have also included a more generic worked-out example showcasing the actual steps of the algorithm and featuring the consolidation of a collider and a chain. Since NeurIPS this year restricts the rebuttal PDF to containing only tables and figures, we provided the actual worked-out example in the answer to reviewer tWPL, who explicitly requested this addition. We look forward to any further discussion with you. Kindly, your authors. Pdf: /pdf/91133f1e54fbb8888ffa297c36040fe10978e254.pdf
NeurIPS_2023_submissions_huggingface
2023
Thrust: Adaptively Propels Large Language Models with External Knowledge
Accept (poster)
Summary: LLMs' parametric memory may be inaccurate or outdated, so retrieving additional information for LLMs can help, but retrieval is costly and may be noisy. This paper therefore proposes Thrust to measure instance-level parametric memory, which can help determine whether to use a retrieval module for enhancing LLMs and is more cost-efficient. By dynamically using external knowledge, the proposed IAPEK can achieve consistent gains across a large number of tasks and LLMs. Strengths: 1. The task is important and the motivation is well elaborated. 2. Thrust is quite data-efficient to construct, which is valuable in practice. 3. Comprehensive evaluations are conducted and valid points are made. Weaknesses: 1. The LLMs used in this paper are not that strong and may not utilize external knowledge well, as you point out. I would recommend adding models like Flan-T5 or other instruction-tuned models that can follow instructions to use external information for QA. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. This paper is well done. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. One assumption of Thrust is that if the PTLM has mastered sufficient knowledge for a task, the hidden states can be used for clustering. But whether to use external knowledge is not just about whether the LLM can use/represent the task well. For example, some time-sensitive questions like `who is the CEO of Twitter` can be confidently but wrongly answered by LLMs. However, in these cases, LLMs should use external knowledge. 2. Missing related work: Xie et al., *Adaptive Chameleon or Stubborn Sloth*: Unraveling the Behavior of Large Language Models in Knowledge Conflicts.
This paper has some conclusions/observations aligned with Thrust: external evidence may mislead LLMs into generating wrong answers. Given that this paper was only available after the NeurIPS submission deadline, missing it in the references will not negatively affect my evaluation of Thrust, but you may consider adding it in the next version. Minor: 3. In the appendix, Table 3 should be annotated with red and green as you did for the table in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
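The clustering intuition behind Thrust described in this thread (hidden states forming per-class clusters when the model "knows" a task) can be illustrated with a deliberately simplified stand-in score. This is not the paper's actual Thrust formula, which additionally accounts for factors such as cluster size and direction; the toy "hidden state" vectors below are invented, whereas in practice they would come from the language model's encoder.

```python
import math

# Toy "hidden state" vectors for a handful of training instances, by label.
train = {
    "yes": [(0.9, 0.1), (1.0, 0.2), (0.8, 0.0)],
    "no":  [(0.1, 0.9), (0.0, 1.0), (0.2, 0.8)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

centroids = {label: centroid(pts) for label, pts in train.items()}

def knowledgeability(query, eps=1e-6):
    """Higher when the query sits close to some class centroid,
    i.e. the model's representation of the instance looks 'decided'."""
    return 1.0 / (eps + min(dist(query, c) for c in centroids.values()))

# A query near the "yes" cluster scores high -> answer without retrieval;
# a query between clusters scores low -> retrieve external knowledge.
confident = knowledgeability((0.8, 0.2))
ambiguous = knowledgeability((0.5, 0.5))
```

Under a fixed retrieval budget, instances would then be ranked by this score and external knowledge spent only on the lowest-scoring ones, which is the selection role IAPEK assigns to Thrust.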
Rebuttal 1: Rebuttal: Thanks for the valuable comments and suggestions. We would like to address your concerns as follows. **LLM ability (W1)**: Thanks for pointing out the discussion regarding the choice of models. We will try to add Flan-T5 in the camera-ready version to compare whether instruction fine-tuning also helps improve external-knowledge utilizability. Our reason for using the original pre-trained T5 is that UnifiedQA is fine-tuned on these models, so that we can compare (in Figure 4) how strongly the model can utilize external knowledge after fine-tuning on this target and show that the fine-tuning step in UnifiedQA does help. On the other hand, UnifiedQA is still comparatively strong on our set of tasks (given its size). For example, LLaMA 2 7B achieves 77.4, 75.2, and 45.9 on BoolQ, ARC-e, and ARC-c, respectively, while UnifiedQA 3B achieves 87.8, 73.7, and 64.5 in similar settings (Table 20 in the LLaMA 2 paper, https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/). **Time Sensitivity (L1)**: Thanks for mentioning this point. We regard time sensitivity as something that can be handled within the IAPEK framework, but not by Thrust, as the framework is motivated by both noise and staticity. We will bring this up in the future-work discussion in the final version: another, orthogonal score measuring time sensitivity can be designed to decide if updated knowledge retrieval is necessary, for example based on (Ning et al., 2022). **Missing Reference (L2)**: We appreciate your introduction to this great paper. We will add it to our discussion regarding the misleading behavior of LLMs in the final version. **Presentation (L minor 3)**: We appreciate your check on our appendix. We will refine Table 3 to make it consistent with the main paper. **References** Qiang Ning, Ben Zhou, Hao Wu, Haoruo Peng, Chuchu Fan, Matt Gardner. 2022. A Meta-framework for Spatiotemporal Quantity Extraction from Text. In Proceedings of ACL.
--- Rebuttal Comment 1.1: Title: Thanks for Your Response Comment: Thanks for your detailed response and your willingness to continue improving this paper. 1. I agree that UnifiedQA is a strong baseline built upon T5, and I am looking forward to seeing the additional results included in the camera-ready version. 2. It is reasonable to categorize questions as Time Sensitivity and not include them in this paper. The current submission already provides sufficiently comprehensive results. Thank you once again for your reply. I am looking forward to the updated version and your future work on time sensitivity! --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you so much on your comments! We will discuss more on the time sensitivity and work on adding Flan-T5 experiments in the camera-ready version.
Summary: This work proposes methods IAPEK and Thrust to make instance-level decisions about when to utilize external knowledge for question answering. IAPEK is instance-level adaptive propulsion of external knowledge, essentially the use of external knowledge only when it is necessary beyond the base model. Thrust is a heuristic scoring method to decide which questions to use external knowledge on, based on various notions of distance from existing clusters of points. Intuitions for Thrust score are outlined in S2.2. The work demonstrates that Thrust outperforms two baseline methods (random and BM25) at overall accuracy when used to select which instances require external knowledge under a fixed budget. They also demonstrate that using IAPEK with Thrust performs nearly as well as using external knowledge in every case, with a lower computational budget. Strengths: - The work makes useful statements about the complex nature of retrieval based QA -- both in terms of efficiency and the counterintuitive finding that external knowledge does not always help - Thrust seems to outperform baselines at selecting instances that require external knowledge - In general, IAPEK seems to improve efficiency without too much loss of performance compared to always using external knowledge Weaknesses: - The Thrust score does not have theoretical grounding, or sufficient ablations to justify design decisions. These are not both necessary, but more justification (either theoretical or experimental) for specific design decisions would be very useful. A description of intuitions is given at line 105, but no experimental examples are given to demonstrate that these cases are relevant. One useful aspect would be comparing to simpler scores. Only random and BM25 are used as baselines, but what about simpler notions of distance, that either do not involve complex clustering, or do not involve the "mean vector" idea for Thrust? 
Something simpler, like distance from existing points, may perform worse, but that is not clear given the current results included in the paper. Perhaps some of the complexity of Thrust is not required. - More broadly, it would be useful to show, even for a subset of datasets, the full set of possibilities between: {no EK, IAPEK default, BM25, Thrust, distance from overall centroid, full EK} - It is not completely clear what aspects factor into the performance of IAPEK/Thrust. In particular, the authors mention that external knowledge can sometimes hurt performance, in which case perhaps Thrust is helping less with efficiency and more with preventing such examples from seeing external knowledge that would reduce performance. One useful addition would be a more comprehensive version of Table 3 with numbers, to answer the question "how often does Thrust outperform full EK?" - Continuing from the point above, it is not clear from the paper whether the justification of Thrust is improving efficiency (i.e. full EK always helps but is expensive) or performance (i.e. there are some examples that do better without EK, and Thrust helps identify these). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Have you considered selecting examples based on notions of uncertainty? E.g. just looking at the entropy of the model on the output space, rather than interacting directly with vectors or model internals. It would seem that a less certain model may make more use of external knowledge. See weaknesses for some questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It would be useful to include more information about limitations.
What is the maximally useful case you see for your work, and what will still be left to do in solving this problem? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
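Since the review asks how Thrust compares to simpler distance notions, it may help to pin down concretely what such a centroid-distance score looks like. Below is a minimal illustrative sketch of a "knowledgeability" score in the spirit of Thrust -- a simplification, not the paper's exact formula (which additionally weights clusters by size and keeps directional information); all function names and the thresholding scheme are hypothetical.

```python
import numpy as np

def centroid_score(query_vec, class_embeddings):
    # Illustrative stand-in for a Thrust-style score: inverse squared
    # distance from the query's hidden representation to the nearest
    # class centroid ("mean vector"). A high score means the query lies
    # near a cluster of seen instances, i.e. the model plausibly already
    # "knows" it. NOT the paper's exact formula.
    dists = []
    for embs in class_embeddings:          # one (n_i, d) array per class
        centroid = embs.mean(axis=0)
        dists.append(float(np.sum((query_vec - centroid) ** 2)))
    return 1.0 / (min(dists) + 1e-8)

def needs_external_knowledge(query_vec, class_embeddings, threshold):
    # Retrieve external knowledge only for low-scoring (unfamiliar) queries.
    return centroid_score(query_vec, class_embeddings) < threshold
```

Under a fixed retrieval budget one would instead rank queries by score and retrieve only for the lowest-scoring fraction, which is the IAPEK setting the review describes.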
Rebuttal 1: Rebuttal: Thanks for the valuable comments and suggestions. For your reference, our design choice ablation and limitation discussion are provided in the Appendix. We would like to address your concerns as follows. **Design Choice (W1)**: We kindly refer you to Appendix (A.4 and Table 2) included in the supplementary materials. As discussed in Footnote 4 of page 5, we presented our experiments on design choice ablation and known limitations in the appendix. We compared the alternative choices such as no cluster size, no direction, using inertia, etc., to justify our design of Thrust. Thanks for your suggestions, and we will include this section in the main paper in the final version. **Results Comparison (W2)**: Thanks for your suggestions on the presentation. Our experiments contain the full set of datasets for {no EK, IAPEK default, IAPEK BM25, Thrust, Full EK}. {no EK, Full EK} are compared in Figure 4. {IAPEK default, BM25, Thrust} are compared in Table 1 and Table 2. These experiments are all conducted on the same set of datasets. We will include the table containing all the entries in the Appendix in our final version. **Details about Performance vs. Efficiency (W3)**: Thanks for your suggestions for the table design. In Table 3, we present that in ⅔ of the cases, Thrust outperforms 99% of full EK performance while saving at least 10% of the expense. More details are as follows: in ⅓–½ of the cases, Thrust rejects external noise and outperforms full EK. For datasets such as BoolQ and CIKQA, where the external knowledge can be noisy, Thrust identifies “some examples that do better without EK” and always outperforms full EK. For datasets such as e-SNLI, where humans annotated the external knowledge, full EK always helps but is expensive. The specific datasets are as follows: For UnifiedQA-base Thrust > full EK: BoolQ, CIKQA, StrategyQA, ARC-C, TriviaQA, AGNews Thrust > 99% full EK: ARC-E, WQ, HotpotQA Thrust < full EK: e-SNLI, TREC, NQ For UnifiedQA-3b Thrust > full EK: BoolQ,
CIKQA, ARC-C, HotpotQA Thrust > 99% full EK: StrategyQA, AGNews, ARC-E, TriviaQA, NQ Thrust < full EK: e-SNLI, WQ, TREC We will add a new table and more discussion comparing Thrust and full EK in the final version. Going one step further, we also identify this comparison as a performance-efficiency trade-off controlled by the expected EK rejection rate. Given the extra space provided upon acceptance, we will present and analyze the trade-off with an AUC-ROC-like curve, where a convex curve is desired. **Uncertainty (Question)**: The entropy of the model output could be applied to IAPEK for classification tasks. However, answers vary in length for open-domain question answering. An example from NQ is “What does a drink from Narcissus's Spring cause the drinker to do?”, where the expected answer is “fall in love with themselves” rather than a choice from a fixed label set, which can make the entropy test unstable. To provide a unified framework for classification and open-domain QA tasks, we designed Thrust for IAPEK. On the other hand, Thrust can also be regarded as a way to measure uncertainty. We will point this out as potential future work in the final version. **Limitations**: We kindly refer you to Appendix A.1 in the original supplementary materials, where we discuss the potential limitations of our work (e.g., it cannot be used for black-box LLMs). We will include this section in the main paper in the final version. Thanks again for the suggestions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have raised the score to 5. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your great comments and suggestions!
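The entropy-based alternative discussed in this exchange is easy to state concretely. Below is a minimal sketch of using output entropy as the retrieval trigger over a fixed label set -- the reviewer's suggestion, not part of the paper's method; note it presupposes classification, which is exactly the limitation the authors raise for open-ended QA. The function names and threshold are illustrative.

```python
import math

def output_entropy(probs):
    # Shannon entropy (in nats) of the model's distribution over a fixed
    # label set; higher entropy = more uncertain prediction.
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def retrieve_if_uncertain(probs, threshold):
    # Query external knowledge only when the model is unsure of its answer.
    return output_entropy(probs) > threshold
```

A budgeted variant would rank test instances by entropy and retrieve for the top fraction, mirroring the fixed-budget setting used for Thrust.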
Summary: The paper addresses the limitations of large-scale pre-trained language models (PTLMs) in effectively utilizing external knowledge. It proposes the instance-level adaptive propulsion of external knowledge (IAPEK) as a solution to leverage external knowledge only when necessary. The paper introduces a novel metric called "Thrust" to measure the knowledgeability of PTLM models at the instance level, using the representation distribution of a small number of seen instances. Extensive experiments demonstrate that Thrust is an effective measurement of PTLM models' instance-level knowledgeability. The paper shows that using the Thrust score as a retrieval indicator achieves significantly higher cost-efficiency compared to naive usage of external knowledge, resulting in a 26% average performance improvement on 88% of the evaluated tasks. These findings contribute to the understanding and real-world application of knowledge-enhanced language models, particularly in scenarios with limited knowledge-seeking budgets due to computation latency or costs. Strengths: 1. The authors clearly highlight the limitations of implicit knowledge in pre-trained language models (PTLMs), such as being opaque, static, and inefficient. This acknowledgment sets the stage for proposing a novel approach to address these limitations. 2. The authors introduce the concept of Instance-level Adaptive Propulsion of External Knowledge (IAPEK) as a solution to address the limitations mentioned earlier. Weaknesses: 1. The methods and experiments are described in an informal and obscure manner, lacking motivation for the specific choices made and failing to compare them with alternative approaches. Moreover, this paper lacks an experimental comparison and discussion with ChatGPT and other instruction fine-tuned large models. 2. A major weakness of this work is its lack of reproducibility.
The paper fails to provide clarity on whether the external knowledge used is published or not, and it does not explain how one can obtain it. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How does IAPEK combine with instruction fine-tuning of large models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not discuss the significance of this work in the context of large models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments and suggestions. We would like to address your concerns as follows. For your reference, our design choice ablation and reproducibility check are provided in the Appendix and referred to in footnotes 4 and 1. Details are as follows: **Design Choice (W1)**: We kindly refer you to Appendix (A.4 and Table 2) included in the supplementary materials. As stated in Footnote 4 on page 5 of our submission, we presented our design choice ablation and known limitations in the appendix. We compared the alternative choices such as no cluster size, no direction, using inertia, etc., and validated our design of Thrust. **Reproducibility (W2)**: We kindly refer you to Footnote 1 of page 1 of our submission, where we promise that “the code and data (including the external knowledge collected) will be released upon acceptance”. On the other hand, we introduce the details of the data collection in Section 3.1. Tools used, such as DPR and Wikipedia paragraphs, are all publicly available at https://github.com/facebookresearch/DPR or https://github.com/castorini/pyserini. For more details, we also included a sample dataset (sample_dataset.json) in the supplementary material. Thank you. **Instruction fine-tuning model (Questions)**: We deliberately design Thrust to rely only on the dev and test queries so that the external knowledge can be arbitrary. For example, in the context of an instruction fine-tuned model, we can use Thrust to rank the queries and only conduct chain-of-thought on the hard examples. On the other hand, we can also use Thrust to suggest whether further details of the question need to be provided. Thanks for your suggestion regarding instruction fine-tuned models; we will suggest this potential usage in the discussion in the final version. **Presentation**: We will improve our presentation of methods and experiments in our final version. Some points are mentioned in our rebuttal to Reviewer eb2S.
Summary: The authors propose a Thrust score that measures whether a pre-trained language model (PTLM) has the knowledge to perform the task. They then go on to use this score to choose when they should use external knowledge (when the Thrust score is low). The main crux of the Thrust score is knowledge representation. The authors argue that if the PTLM places a sample close to related samples, then it has sufficient knowledge about the sample. I feel the Thrust score is a very good contribution. The adaptive lookup for knowledge experiments are useful, though there might be other ways to use this Thrust score. Strengths: 1. The authors propose a Thrust score, which is a measure of a pre-trained language model's knowledge of the instance. These are based on the distance of a sample to the centroid of a cluster. The clusters are task examples. There can even be multiple clusters within a class in a classification task. This is measured from representations in the last hidden layer (last decoder layer in T5). 2. Given the authors have a Thrust score, they proceed to look up external knowledge only when the model does not have internal knowledge to predict on a sample. The hypothesis is that looking up external knowledge all the time is wasteful and sometimes even counterproductive because of noisy external knowledge. 3. Overall I think the Thrust score is a very useful contribution. 4. The ablations - checking which layer to use for representations, comparison with BM25, comparison with full knowledge usage - are meaningful. Weaknesses: 1. The adaptive knowledge injection (while useful and demonstrating that the Thrust score is effective) could have benefited from knowledge-probing experiments rather than just QA, MC, etc. The performance on a (triples) tail-prediction task, or even knowledge probing as in the work below, would be even more interesting. Onoe, Y., Zhang, M.J., Padmanabhan, S., Durrett, G. and Choi, E., 2023. Can LMs learn new entities from descriptions?
Challenges in propagating injected knowledge. arXiv preprint arXiv:2305.01651. 2. Some claims, like the one below, seem a bit subjective and could be avoided, at least in the abstract. "we can achieve significantly higher cost-efficiency with Thrust score as the retrieval indicator than the naive usage of external knowledge on 88% of the evaluated tasks with 26% average performance improvement." 3. The results could be presented a bit better. For example, Figure 4 seems unnecessarily complicated. It's of course the authors' prerogative, but it took me a while to get used to the thrust/propulsion terminology. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the Thrust score be used for knowledge injection via additional pre-training or fine-tuning? Some time ago, there was a paper that additionally pre-trained on domains where the model was performing worse. 2. Have you considered knowledge prompts or prefix tuning on downstream tasks besides adding the external knowledge and "Answer:" to the prompt? There is a typo on line 271: "Thrust can help identifies instances requiring". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There isn't a separate discussion on limitations, but they do mention tasks where this approach under-performs baselines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments and suggestions. We would like to address your concerns as follows. **Extended Usage (W1 & Q1)**: We agree that adaptive knowledge injection can be extended to other cases such as ECBD or EKP (Onoe et al., 2022, 2023). We will include this line of work in the discussion in the camera-ready version. By design, Thrust is independent of the type of external knowledge, so the adaptively used external knowledge can be of any sort, for example, definitions in EKP. Furthermore, Thrust can also be used as a way to measure performance without extensive fine-tuning. In Section A.4 of the appendix included in the supplementary material, we show that Thrust can also be regarded as a dataset hardness metric following the setting of Zhao et al. (2022), so that, with an active learning scheme (e.g., Tamkin et al., 2022), we can use Thrust to rank example hardness and potentially schedule the pre-training or fine-tuning. **Presentation (W2,3 & Limitations)**: (1) We will remove the subjective expressions (e.g., significantly) and correct all typos. (2) We will simplify the figure by only presenting the results for a subset of the models: T5-3b, GPT-J, OPT-30b, and UnifiedQA-3b. We will put the detailed figure in the Appendix. (3) We will change propulsion to augmentation. For the term Thrust, we will consider seeking a better acronym. (4) Our limitation discussion was included in the Appendix; we will move it back into the main document in the final version. We discussed the cold-start and black-box LLM problems. Thanks so much for the suggestions! **Knowledge Prompts (Q2)**: In this paper, we mainly study when we should add external knowledge; the way to use that knowledge is out of the scope of this work. We mainly follow the setting of the best knowledge-utilization model in our experiments (i.e., UnifiedQA).
We will add a discussion about the potential to incorporate Thrust with knowledge prompts and prefix tuning as an important future direction. **References**: Xinran Zhao, Shikhar Murty, and Christopher D. Manning. 2022. On measuring the intrinsic few-shot hardness of datasets. In Proceedings of EMNLP. Alex Tamkin, Dat Pham Nguyen, Salil Deshpande, Jesse Mu, and Noah Goodman. 2022. Active learning helps pretrained models learn the intended task. In Advances in Neural Information Processing Systems. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for incorporating the suggestions. Yes, I understand the focus of your work is on when to use external knowledge. I was just curious if you tried PEFT methods. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you so much for your comments on extending the scope of our work. We will point this out in the discussion and work on how Thrust can cooperate with PEFT methods such as LoRA, prefix-tuning, soft prompting, and adapters. Besides the active learning scheme mentioned, another way could be to use the adapter-layer representations to calculate the Thrust score.
NeurIPS_2023_submissions_huggingface
2023
Recasting Continual Learning as Sequence Modeling
Accept (poster)
Summary: The paper proposes to formulate continual learning as a sequence modeling problem. This new formulation views the model's learning process with continual session data as the forward pass of a sequence model, such as a Transformer, rather than relying on backpropagation. Specifically, keys and values within the Transformer can be interpreted as the internal state of a memory module in continual learning. To optimize the sequence model's parameters for continual learning problems, meta-continual learning is employed. This involves meta-learning the Transformer model using episodic learning. To address the computational and memory costs associated with long sequence modeling, efficient transformers, including the Linear Transformer and Performer, are utilized. The proposed approach's effectiveness is substantiated through performance evaluations across multiple benchmark datasets. Strengths: * Formulating continual learning as a sequence modeling problem is very novel to me, and the authors provide a very detailed explanation of why the two problems can be connected under the framework of meta-continual learning. For example, the similarities between the inner loops of the two problem formulations are illustrated clearly in Algorithms 1 and 2. Furthermore, the paper conceptually outlines the connection between dynamic-architecture CL approaches and standard Transformers. * The introduction is written in a very clear manner, making it easy for readers to grasp the central ideas and the contributions of the study. Weaknesses: * The paper doesn't adequately distinguish between continual learning and meta-continual learning. While the text mentions that meta-continual learning aims to automatically learn how to continue learning, as opposed to standard continual learning's reliance on manual design, it fails to mention the necessity of a large-scale offline dataset to create a meta-training dataset.
In contrast, standard continual learning does not require such an offline dataset for training. * The explanation of meta-continual learning lacks clarity. I recommend that the authors define and explain terms such as episodes, meta-train, and meta-test more explicitly. Particularly, the concepts of a support set and query set, which are typically used under the meta-learning framework, are not mentioned in this paper. * An important baseline in meta-continual learning, "Wandering within a world: Online contextualized few-shot learning," presented at ICLR 2021, is overlooked in the related work section. * While the text acknowledges the limitations of SGD-based meta-continual learning, such as the high cost of second-order gradients and scalability issues, it doesn't reference previous work on efficient meta-learning approaches. e.g. [1]. I would suggest the authors incorporate references to efficient meta-learning techniques and add a dedicated section comparing them to transformers in terms of computational and memory costs. [1] Large-Scale Meta-Learning with Continual Trajectory Shifting. ICML 2021 * The presentation of the experiment results could be improved. A central challenge of continual learning is the balance between catastrophic forgetting and rapid adaptation to new knowledge. However, Table 1 doesn't illustrate the proposed method's effectiveness related to these two criteria. Therefore, I suggest the authors represent the experiment results using a plot charting the number of continual sessions against average accuracy. Such a plot is typically employed in previous works. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see questions and suggestions in Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper includes a discussion of its limitations but fails to mention potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
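The reading of Transformer keys and values as the internal state of a continual-learning memory module, mentioned in the review's summary, can be made concrete with a toy sketch. The class below is an illustration of that interpretation only (single head, fixed random projections, no meta-training), not the paper's implementation: each training token appends a (key, value) pair during the forward pass -- which is all the "learning" there is, with no inner-loop SGD -- and a query reads the memory out with attention.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

class AttentionMemory:
    # Toy illustration: one Transformer attention layer viewed as a
    # continual learner whose memory is the growing key/value cache.
    def __init__(self, dim, rng):
        self.Wq = rng.normal(size=(dim, dim))
        self.Wk = rng.normal(size=(dim, dim))
        self.Wv = rng.normal(size=(dim, dim))
        self.keys, self.values = [], []

    def write(self, x):
        # One continual-learning step: memorize the token as a (k, v) pair.
        self.keys.append(self.Wk @ x)
        self.values.append(self.Wv @ x)

    def read(self, x):
        # Prediction for a query: attention readout over everything seen so far.
        K, V = np.stack(self.keys), np.stack(self.values)
        return softmax(K @ (self.Wq @ x)) @ V
```

Because the memory grows with every token, the plain Transformer's per-query cost scales with the stream length, which is what motivates the paper's use of efficient variants such as the Linear Transformer and Performer.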
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and insightful comments. #### **Continual Learning (CL) vs. Meta-Continual Learning (MCL)** As pointed out by the reviewer, a large meta-training dataset is one of the fundamental assumptions of meta-learning and its variants, such as meta-continual learning. We will make sure to state this assumption clearly in the final draft. #### **Explanation of MCL** We will improve the description of MCL and terminologies following the suggestions. We did not use the terms “support set” and “query set” because (i) the concept corresponding to the support set in MCL is not a set but a stream, and (ii) prior works in MCL (e.g., Javed et al., Beaulieu et al.) did not use them either. But it is a good idea to draw a connection to the meta-learning terminologies for better understanding. We will update the description accordingly. #### **Wandering Within a World (Ren et al.)** We thank the reviewer for introducing an important related work. However, we find this work is closer to continual meta-learning (CML) than to meta-continual learning (MCL). These two learning frameworks sound very similar, but they are fundamentally different in terms of their underlying assumptions and objectives. Given the significant confusion they have engendered within the research community, we put a considerable amount of effort into Appendix A, where we summarize and compare various learning frameworks using visual illustrations and formal algorithms. Borrowing the expression from Ren et al., the MCL setting is *episodic*. There are multiple episodes in the meta-training set, and each episode consists of a series of tasks. For each episode, an independent model is produced, which does not need to perform well on tasks from other episodes. In CML, on the other hand, there is no meta-training / meta-test distinction, and there is only one episode. Please refer to Appendix A for more detailed illustrations. We will include Ren et al.
in the prior works on CML. #### **Continual Trajectory Shifting (Shin et al.)** This is also an important related work. There has been little work on applying efficient meta-learning techniques to the MCL domain, and we believe this direction would be an interesting research topic in the near future. In Appendix C.6, we discussed the first-order approximation in the MCL setting. We will expand this section to more comprehensively cover various efficient meta-learning methodologies such as Shin et al. #### **Forgetting Analysis** In Appendix C.7, more specifically in Figures C.10 and C.11, we presented a detailed analysis of forgetting. Please understand that the sizes of the plots are too large to be in the main text, considering the page limit. #### **Potential Negative Social Impacts** Since our work is not directly related to applications that are immediately deployable in society, we believed that the potential negative societal impacts would be minimal. However, we will address potential negative impacts in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. Since most of my concerns are addressed, I would recommend an accept and maintain my original score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the reviewer’s recommendation for acceptance. We eagerly anticipate presenting our work at NeurIPS.
Summary: The paper redefines Meta-Continual Learning (MCL) as a sequence learning problem. Following this definition, the paper proposes a Transformer-based meta-continual learner. The method is evaluated on several classification and regression tasks. Strengths: Overall, I think the paper is well written and does a good job justifying the approach. The idea of casting MCL as a sequence learning problem is novel (at least in CL) and interesting. Weaknesses: Baselines: the paper mentions that Prototypical Networks could be used in MCL. It would be interesting to compare against them. In general, the baselines are a bit limited. It would be interesting to compare against other methods that keep all the data in memory, like Transformers do. Minor comments: - Lines 40-41 claim that sequence learning exploits in-context learning (ICL). However, ICL is a property of large pretrained models; I don’t think it applies to this setting Technical Quality: 3 good Clarity: 3 good Questions for Authors: - LINE 270: use pretrained model. Is this model trained from scratch on each episode? - LINE 303: “task identities are not provided”. However, only the data from a specific task is provided, so it’s equivalent to having task identities. - How robust is this algorithm to the data order? - It is unclear to me what the computational cost of the method is at inference time compared to static networks. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and insightful comments. #### **Baselines** Since Prototypical Network (PN) and GeMCL cannot be applied to domains other than classification, we prioritized testing other baselines that can perform both regression and classification in our initial submission. As suggested, we have tested the Prototypical Network (PN) and also GeMCL (it is a relatively simple extension of PN) on the classification benchmarks and found an interesting trend. We will include the results in the final version. In the following, we compare the classification errors of PN and GeMCL in the 20-task classification benchmarks. | Method | CIFAR-100 | | CASIA | | MS-Celeb-1M | | |---|---:|---:|---:|---:|---:|---:| | | Meta-train | Meta-test | Meta-train | Meta-test | Meta-train | Meta-test | | PN | $0.0^{\pm0.0}$ | $76.6^{\pm0.3}$ | $0.2^{\pm0.0}$ | $0.4^{\pm0.0}$ | $32.5^{\pm0.1}$ | $33.6^{\pm0.1}$ | | GeMCL | $0.0^{\pm0.0}$ | $76.6^{\pm0.4}$ | $0.2^{\pm0.0}$ | $0.4^{\pm0.0}$ | $32.1^{\pm0.1}$ | $33.3^{\pm0.2}$ | | OML | $0.6^{\pm0.1}$ | $89.9^{\pm0.4}$ | $2.8^{\pm0.1}$ | $3.2^{\pm0.1}$ | $41.8^{\pm0.3}$ | $42.5^{\pm0.2}$ | | ANML | $0.4^{\pm0.1}$ | $88.1^{\pm1.4}$ | $3.7^{\pm0.5}$ | $4.3^{\pm0.5}$ | $43.8^{\pm0.3}$ | $44.8^{\pm0.4}$ | | MetaFSCIL | $34.5^{\pm2.1}$ | $82.1^{\pm0.3}$ | $12.0^{\pm0.4}$ | $12.2^{\pm0.5}$ | $57.6^{\pm0.3}$ | $57.8^{\pm0.2}$ | | Transformer | $0.0^{\pm0.0}$ | $82.8^{\pm0.8}$ | $0.3^{\pm0.0}$ | $0.4^{\pm0.0}$ | $29.1^{\pm0.2}$ | $30.0^{\pm0.2}$ | | Linear TF | $0.1^{\pm0.1}$ | $83.4^{\pm0.5}$ | $0.4^{\pm0.0}$ | $0.7^{\pm0.0}$ | $31.1^{\pm0.3}$ | $32.4^{\pm0.3}$ | | Performer | $0.0^{\pm0.0}$ | $82.9^{\pm0.3}$ | $0.5^{\pm0.0}$ | $0.7^{\pm0.0}$ | $32.5^{\pm0.5}$ | $33.7^{\pm0.2}$ | Due to their simplicity, PN and GeMCL are robust to meta-overfitting and significantly outperform all other methods in smaller datasets such as CIFAR-100. 
However, in the CASIA benchmark, where a larger number of classes reduces the effect of meta-overfitting, their performance is on par with Transformers. Finally, in the most challenging MS-Celeb-1M benchmark, they fall behind Transformers. We suspect that PN's simple algorithm of averaging the embeddings is not sufficient to integrate the information of the training stream if the task distribution becomes more complex. #### **In-Context Learning (ICL)** ICL is often introduced as an *emergent* ability of pretrained LLMs. However, considering the original description in the GPT-3 paper, the definition of ICL does not need to be restricted to pretrained LLMs. One may explicitly train an arbitrary sequence model to perform ICL, which is exactly what we do in this work. --- ### Questions #### **Line 270** Thank you for pointing this out. There is absolutely no SGD update in an inner loop. The parameters of both CNN and Transformer are fixed inside each episode, and only the outer loop updates the parameters. We will make this clear in an updated version. #### **Line 303** We would like to clarify our use of terminology. In the statement 'task identities not being provided,' we referred to the distinction between task-aware and task-agnostic CL. Similar to prior works on MCL, such as OML and ANML, we followed task-agnostic CL settings. Note that the term "task" has different meanings in meta-learning and MCL literature. In the meta-learning literature, a meta-training (or meta-test) set is often said to hold multiple *tasks*, but we use the term “episode” to refer to the corresponding concept in MCL. In MCL, each episode is a CL problem, which consists of $K$ *tasks* randomly sampled from the set of possible tasks. During the test phase of an episode, the model receives an input and is tasked to infer the corresponding output without knowing which task it belongs to. Since the input can belong to any of the $K$ tasks, it is not equivalent to having task IDs. 
We will improve the description in an updated version. #### **Robustness to Data Order** This is an intriguing question. Unlike conventional (meta-)continual learning methods, we can rephrase the question of “how robust the MCL method is to the data order” as “how robust the sequence model’s ICL capability is to the order of in-context examples.” Thus, the robustness depends on which sequence model is used and how the meta-training set (i.e., a training set from the perspective of sequence modeling) is constructed. We believe as long as Transformer is used as the sequence model and the meta-training set sufficiently covers diverse data orders, our approach should be robust to the data order. #### **Computational cost** Since our main idea is to use generic sequence models as MCL methods, the computational cost depends on the choice of the sequence model. In this work, we tested the standard Transformer, Linear Transformer, and Performer, whose computational costs per test example are $O(T)$, $O(1)$, and $O(1)$, respectively, where $T$ is the number of training examples in an episode. Note that these are the same as the costs of the sequence models inferring one token, given a context of length $T$.
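The O(1)-per-example cost of the Linear Transformer cited above comes from its recurrent form (Katharopoulos et al., 2020), which compresses the growing key/value cache into a fixed-size state. Below is a minimal sketch with the standard elu+1 feature map -- illustrative of the mechanism only, not the authors' code.

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map phi used by the Linear Transformer.
    return np.where(x > 0, x + 1.0, np.exp(x))

class LinearAttentionState:
    # Fixed-size recurrent state replacing the O(T) key/value cache:
    # S accumulates phi(k) v^T and z accumulates phi(k).
    def __init__(self, dim):
        self.S = np.zeros((dim, dim))
        self.z = np.zeros(dim)

    def update(self, k, v):
        # One training token: O(d^2) work, independent of stream length T.
        phi_k = elu_plus_one(k)
        self.S += np.outer(phi_k, v)
        self.z += phi_k

    def read(self, q):
        # Attention output for a query, computed from the compressed
        # state alone -- no replay of the stream is needed.
        phi_q = elu_plus_one(q)
        return (phi_q @ self.S) / (phi_q @ self.z + 1e-8)
```

The readout is algebraically identical to linear attention over the full stream, which is why the per-example inference cost no longer grows with the number of training examples seen in the episode.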
Summary: This paper applies transformers and their efficient variants as sequence models to the problem of meta-continual learning. More specifically, instead of running gradient descent on a stream of training data, this paper trains transformers to do in-context continual learning over the data stream. It then compares these transformer-based approaches to three other MCL baselines on regression and image classification tasks. Strengths: Originality: There have been many works on using sequence models for meta-learning, and many works on meta-continual learning. However, to my knowledge this is the first paper where sequence-model-based meta-learning is tested for continual learning capability. Quality: code included, good reproducibility; rich content in the Appendix, many details and analysis. Clarity: well-written and easy to follow. Many schematic illustrations that make the core ideas clear. Significance: in the long run, being able to leverage data and compute to learn a CL algorithm instead of hand-engineering one is an important topic. Weaknesses: The main weakness is that the experiments have not convincingly demonstrated the proposed method is practically useful as a continual learning method. One reason is its scalability to longer sequences. If I understand correctly, during the meta-test, almost all methods are evaluated on episodes of only 20 x 5 = 100 examples. The only experiment that’s slightly longer is Table 2 with 100 x 5 = 500 examples, and only one baseline was included. In comparison, OML showed it can scale to 1000 examples and ANML to 9000 steps. But frankly even these are too short to be useful for continual learning, which is supposed to handle much longer streams than normal deep learning settings. How would this method fare if at test time the episodes are much longer? The second reason is its generalization ability to out-of-distribution data during meta-test.
All evaluations in this work assume the same distribution and the same episode length for meta-train and meta-test. However, in real CL applications, one can’t know beforehand the distribution of future data or how long the stream will be. One advantage of the SGD-based continual-meta learning approach is that the inner loop uses SGD for optimization, so even if the meta-test data is OOD, SGD can still learn. Can transformers still learn new data if it is OOD? The third reason is that it’s not compared to any competitive conventional CL baselines. Even if the authors only intend to show competitiveness among MCL approaches, it’s still good to have other types of CL approaches for reference. In addition, there are other meta-learning methods that could be competitive baselines but are not referenced, for example [1, 2]. Although these meta-learning methods are not particularly designed for continual learning, they have both demonstrated continual learning capabilities. In particular, [1] proposed an optimization-based approach that can extend to arbitrarily long inner loops and trained a continual learning optimizer with it; [2] notably also applies a transformer as a meta-learner over episodes for in-context RL. [1] Meta-Learning with Warped Gradient Descent https://arxiv.org/pdf/1909.00025.pdf [2] In-context Reinforcement Learning with Algorithm Distillation https://openreview.net/forum?id=hy0a5MMPUv Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why is the 100-task MCL only tested on CASIA? 2. The authors mentioned that for OML and ANML, full backprop is used instead of truncated backprop for fair comparison. But truncated backprop was what made these methods scalable to long inner loops, so for the 100-task MCL I think it’s okay to use the truncated version. 3. One advantage of OML that was highlighted in the original paper is that it’s complementary to other CL methods.
Can transformer-based MCL methods be combined with other CL methods too? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: As mentioned in the weakness, the main limitation of this method is that it can’t be used for continual learning in practice. I think this paper can be dramatically improved if the authors can run experiments where meta-test episodes are very long and use different distribution and episode length for meta-train and meta-test. It would also be more convincing to compare with more competitive CL baselines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
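The reformulation summarized in the review above — replacing the SGD inner loop with a single forward pass of a sequence model — can be sketched roughly as follows. This is a hypothetical toy, not the authors' code: the stream of (x, y) pairs is packed into one token sequence, and the stand-in "sequence model" is a crude kernel regressor that mimics what a meta-trained causal Transformer would do in-context.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, T = 4, 20  # input dim and stream length (hypothetical)

# A continual-learning episode: a stream of (x, y) pairs.
xs = rng.normal(size=(T, d_x))
ys = (xs @ rng.normal(size=d_x) > 0).astype(float)  # toy labels in {0, 1}

# "Inner loop" of the reformulation: no SGD at all. The stream becomes
# one token sequence; each token concatenates an input and its label.
tokens = np.concatenate([xs, ys[:, None]], axis=1)  # shape (T, d_x + 1)

def sequence_model_predict(context, query):
    # Stand-in for a meta-trained causal Transformer: weight the stored
    # labels by similarity between the query and each context input.
    # Purely to make the data flow concrete, not a trained model.
    sims = np.exp(context[:, :-1] @ query)
    return (sims @ context[:, -1]) / sims.sum()

y_hat = sequence_model_predict(tokens, xs[0])
assert 0.0 <= y_hat <= 1.0  # weighted average of {0, 1} labels
```

The review's scalability concern maps directly onto this sketch: the prediction attends over all `T` context tokens, so longer streams mean longer sequences for the model to handle.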
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and the acknowledgment of the various strengths of our work. We believe there are some differences in viewpoints between ourselves and the reviewer concerning the relationship between CL and MCL settings. We hope our following responses help close the gap. #### **Continual Learning vs. Meta-Continual Learning** We think the relationship between CL and MCL directly corresponds to the relationship between standard learning and meta-learning. Therefore, CL and MCL are two distinct learning frameworks, each with its own set of assumptions. Just as meta-learning methods cannot be evaluated in the standard learning setting (there is no other episode to meta-train on), MCL methods should likewise be compared in MCL settings. Therefore, we present our method as an effective MCL approach rather than a CL method, as prior works on MCL (e.g., OML, ANML) did. #### **Scalability of Learning Paradigms Based on Meta-Learning** The reviewer’s concern about the scalability of MCL methods is a valid point, and we agree that scaling up MCL approaches is important future work. However, we think the scalability issue does not stem from individual methods but from the general meta-learning setting (not limited to MCL). The two-level optimization (inner loop + outer loop) of meta-learning is inevitably resource-intensive, and the size of the inner loop has to be kept small to fit the whole optimization process within an available computational budget. Therefore, existing meta-learning (and MCL) research mostly assumes rather small-scale episodes. Given this perspective, further advances in hardware may be a prerequisite to solving the scalability issue of meta-learning, just like the remarkable progress of large language models (LLMs) enabled by the development of more powerful GPUs and distributed training systems.
It is important to note that none of the MCL methods (including the baselines) tested in this work has a theoretical limit on the size of the problem it can handle. Given a larger computational budget, all the methods can be applied to bigger problems. In addition, we believe our sequence modeling approach has better potential in terms of scalability compared to SGD-based approaches, for the following reasons. First, efficiently handling longer sequences is one of the top-priority topics in sequence modeling research, often motivated by potential applications to LLMs. Our approach can directly benefit from advances in this field of research. Second, both Transformers and efficient Transformers can compute the inner loop in parallel, unlike the SGD-based approaches that require sequential computation of the inner loop. Therefore, it is easier to take advantage of new hardware with massive parallelism. #### **Handling OOD Data in Meta-Test** Although it is desirable to generalize to OOD data in the meta-test phase, it is not the primary goal of either meta-learning or meta-continual learning. The episodes in both the meta-training and meta-test sets are generally assumed to be drawn from the same distribution, just like the examples in both the training and test sets are assumed to be drawn from the same distribution in standard learning settings. This assumption is also stated in widely circulated works on meta-learning, such as MAML (Finn et al.) or "Meta-Learning in Neural Networks: A Survey" by Hospedales et al. It is also debatable whether SGD-based MCL approaches are truly capable of handling OOD data. As pointed out by the reviewer, SGD updates will surely enable learning new data even if it is OOD. Our main concern is whether the SGD-based approaches can effectively prevent the forgetting of such OOD data. In the case of OML, for example, it meta-trains an encoder to produce special features that are robust to forgetting from SGD.
However, there is no guarantee that the encoder is still effective for OOD data. #### **Comparison with Conventional CL Methods** Not only in our work but also in the OML and ANML papers, CL baselines are NOT compared because CL methods cannot achieve meaningful scores in MCL settings. As we explained above regarding the scalability issue, current MCL settings generally focus on few-shot settings, which are unsuitable for general CL algorithms. Meanwhile, we will thoroughly discuss [1] and [2] as the reviewer suggested. --- ### Questions #### **100-Task Experiments** We provided more 100-task experiments in Appendix C.2. Please understand that we had to select only a few key results for the main text due to the page limit. Appendix C also contains many other experiments. #### **Truncated Backprop for OML and ANML** Yes, one may use truncated backprop to improve the scalability of OML or ANML. However, as truncated backprop is an approximation of full backprop, it can cause larger errors in OML and ANML. Thus, we chose to use full backprop in all methods. #### **Combination with Other CL Methods** Although it depends on the specific formulation of individual CL methods, the model updates of our approach are based on the forward pass of a sequence model (instead of SGD) and thus would be incompatible with SGD-based CL methods. --- Rebuttal Comment 1.1: Title: CL vs. MCL Comment: My sincere apologies for the delay in my response! I would like to express my gratitude for the comprehensive rebuttal provided by the authors, and I do think applying sequence models such as transformers to meta-continual learning is an important direction. However, it is evident that a fundamental difference exists between our viewpoints. And after a thorough examination of the authors' rebuttals, I believe that the gap still remains. The authors assert that CL and MCL are two distinct frameworks and should not be directly compared.
They emphasize that their work primarily focuses on presenting an effective MCL method, rather than a CL method. However, I hold the perspective that MCL is merely one approach to CL, achieved by learning to continually learn using some meta-training data. Its ultimate objective is no different than other CL approaches, such as those based on regularization, rehearsal, and expansion. Thus, the effectiveness of an MCL method hinges on its capability as a CL method. The authors tried to draw an analogy between MCL and Meta-learning, contending that MCL should be permitted to make additional assumptions, such as the meta-training and meta-testing data are drawn from the same distribution. While the authors are well within their rights to confine their study to scenarios where this assumption holds, it would render their setting somewhat artificial and inapplicable to many real-world situations. I believe that assumptions should be dictated by the problem rather than the approach. For meta-learning, it makes sense to assume that meta-training and meta-testing data are IID due to realistic scenarios where this is generally valid, such as few-shot learning problems. However, when it comes to continual learning, it seems unreasonable to assume the same distribution between meta-training and meta-testing data, as non-IID data is precisely what makes CL unique. For these reasons, I am regrettably not convinced that the proposed methods are effective MCL methods in their current form. It remains to be seen whether this method can scale to longer sequences, generalize to OOD data, and compete with other conventional CL approaches. That being said, I respect the majority's opinion, and I am fine with accepting this paper if the AC so decides. However, I will maintain my rating, as the paper does not yet meet my personal standard for a NeurIPS publication. --- Reply to Comment 1.1.1: Comment: We truly appreciate the reviewer for the additional time and effort to respond. 
#### **Our Approach Can Handle OOD** We think we slightly misinterpreted the reviewer’s concern and provided a partly misleading explanation about OOD in our previous rebuttal. As pointed out by the reviewer, each task in a CL episode is generally OOD and has not appeared previously. This key property of CL holds in our setting as well. Specifically, all tasks that appear during the meta-test are OOD at the task level, i.e., they have never been seen before, even in the meta-training phase. In classification benchmarks, the sets of classes in the meta-training and meta-test sets are completely disjoint. From this perspective, our approach can handle OOD data, and we have already demonstrated this in our experiments. Our approach can utilize the strong generalization ability of modern sequence models, which is especially highlighted in natural language domains. What we argued in the rebuttal and the paper concerns OOD at the episode level, but we think this caused unnecessary confusion. If we construct the episodes of the meta-training and meta-test sets in a similar manner, we can consider their *meta*-distributions of the *episodes* (not their constituent tasks) to be the same, even if the individual tasks do not overlap at all. We apologize for the confusion and will improve the explanation. #### **CL vs. MCL** We fully agree that the ultimate goal of MCL may not be different from that of CL. However, the existence of the meta-training set in MCL is not just a methodological characteristic but a fundamental difference in problem setting. This difference draws a clear border between CL methods and MCL methods: CL methods have no mechanism to utilize the meta-training set, while its existence is a fundamental assumption of MCL methods. This is why we, along with all the previous works on MCL, refrain from comparing with CL methods.
#### **Why We Should Take MCL Seriously** Since the reviewer’s criticism is not limited to our work but extends to all the prior works on MCL, we would like to share our thoughts on why MCL research is important. Knoblauch et al. [1] rigorously proved that continual learning, in general, is an NP-hard problem; it is impossible to design a CL algorithm that works universally well in any CL episode. Even in the case of humans, we are naturally good at continually learning some tasks (e.g., memorizing faces) but terrible at others (e.g., memorizing digits). In this regard, for a CL algorithm to perform well, there must be some structure in the CL episode, and the CL algorithm should have prior knowledge to exploit it. We think humans’ CL ability has been meta-optimized for the skills that are useful for survival and reproduction (e.g., memorizing faces). To implement such an ability in an artificial agent, there are two choices: (i) manually designing a CL algorithm with a human prior and (ii) designing an MCL algorithm to let it learn the structural prior from meta-training data. The latter better aligns with Sutton’s *The Bitter Lesson*. Therefore, we strongly believe that MCL research should continue, despite the current limitations. We hope the reviewer understands that MCL research is still at an early stage, and the limitations (especially the scalability) can be resolved by advances in hardware or sequence modeling technologies. [1] Knoblauch et al., Optimal Continual Learning has Perfect Memory and is NP-HARD, ICML 2020.
Summary: This work looks at treating the meta-continual learning problem as a sequence modeling problem instead. Rather than traditional approaches that train a model with an inner loop and then compute a meta-gradient in the outer loop, they replace the inner loop with just inference in a sequence model. The meta-gradient step is replaced by a normal gradient step taken based on the sequence seen by the model. To make this approach work, the paper uses transformers that (1) use causal attention (i.e., information transfer only happens forward in time) and (2) make use of kernel-based transformers to address the quadratic memory usage of traditional transformers. They test their approach on 4 classification datasets and 3 regression problems. Strengths: - The results show that there is potential for this approach of treating meta-continual learning as sequence modeling. The performance is generally competitive with or superior to prior approaches. - The presented method is memory-efficient and fast compared to previous MCL approaches, at least when dealing with typical MCL benchmarks. - The paper is well presented and readable, with helpful figures. Weaknesses: - The paper does mention that their model tends to overfit on the CIFAR-100 dataset. The paper hypothesizes that this is because task diversity is lower, but this is not verified experimentally. - The novelty is a bit limited, as the paper simply applies transformers to the task of MCL, but still good enough. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Do you have results showing how much the performance drops off with efficient transformers compared to the standard one as the data size increases? - Do you see different performance trends when you increase data per task as opposed to the number of tasks? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The paper discusses the performance tradeoff that occurs with using efficient transformers to handle long sequences. I think this is a sizable problem, as the method would likely start underperforming or be unscalable when given more data per sequence. They do point out, however, that since they are using standard transformer models, as progress is made with those architectures, that should map well to their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and insightful comments. #### **Meta-Overfitting and Task Diversity** In the context of classification benchmarks, the term “task diversity” mostly refers to the number of classes. We apologize for not using a clearer description. At the top of Table 1, we highlighted the number of classes in each benchmark to show that the number of classes is strongly correlated with the degree of meta-overfitting. #### **Novelty** We respectfully argue that the main novelty of our work is the reformulation of MCL as a conventional sequence modeling problem. The use of Transformers is just a specific example. Our work opens the possibility that any sequence model that comes in the future can be used as an MCL solver as it is. --- ### Questions #### **Efficient Transformers vs. Standard Transformers with More Data** Yes, we do have results. In Appendix C, we present many more experiments analyzing various aspects of our approach. Among them are the experiments with longer episodes (Appendix C.2). We test the standard Transformer, Linear Transformer (the best efficient Transformer), and OML (the best baseline). The errors due to longer episodes increase in the order of Transformer < Linear Transformer < OML. #### **Increasing Examples per Task vs. Increasing the Number of Tasks** This is an interesting question. To check the trends when the number of examples per class increases, we additionally tested OML, Transformer, and Linear Transformer in 20-task 25-shot settings. The results are summarized below, with 20-task 5-shot and 100-task 5-shot results for comparison. Every score represents an error (the lower, the better).
[CASIA Classification (%)]

| Method | 20-task 5-shot | 20-task 25-shot | 100-task 5-shot |
|---|---:|---:|---:|
| OML | $3.2^{\pm 0.1}$ | $2.4^{\pm 0.0}$ | $6.8^{\pm 0.9}$ |
| Transformer | $0.4^{\pm 0.0}$ | $0.3^{\pm 0.0}$ | $1.0^{\pm 0.0}$ |
| Linear TF | $0.7^{\pm 0.0}$ | $0.5^{\pm 0.0}$ | $2.3^{\pm 0.1}$ |

[Rotation]

| Method | 20-task 5-shot | 20-task 25-shot | 100-task 5-shot |
|---|---:|---:|---:|
| OML | $0.971^{\pm0.046}$ | $0.994^{\pm0.004}$ | $0.990^{\pm0.008}$ |
| Transformer | $0.040^{\pm0.001}$ | $0.033^{\pm0.001}$ | $0.031^{\pm0.001}$ |
| Linear TF | $0.075^{\pm0.002}$ | $0.069^{\pm0.005}$ | $0.047^{\pm0.002}$ |

[Completion]

| Method | 20-task 5-shot | 20-task 25-shot | 100-task 5-shot |
|---|---:|---:|---:|
| OML | $0.1092^{\pm0.0002}$ | $0.1079^{\pm0.0002}$ | $0.1087^{\pm0.0001}$ |
| Transformer | $0.0999^{\pm0.0002}$ | $0.0973^{\pm0.0002}$ | $0.0989^{\pm0.0001}$ |
| Linear TF | $0.1039^{\pm0.0003}$ | $0.1013^{\pm0.0002}$ | $0.1084^{\pm0.0001}$ |

We found no significant difference in trends. The order of performance is consistent in all experiments: Transformer > Linear Transformer > OML. In all cases, the 20-task 25-shot setting scores better than the other configurations, since it has a small number of tasks and a large number of shots. --- Rebuttal Comment 1.1: Comment: I appreciate the response, and am satisfied by the answers. I still recommend an accept; however, I will not be raising my score, as I do believe there are limitations to their approach, specifically with respect to the ability to handle long sequences of data. --- Reply to Comment 1.1.1: Comment: We are pleased that the reviewer is satisfied with our response. Additionally, we would like to clarify that handling long sequences is a challenge for *present* Transformers. We emphasize that our main contribution is the reformulation of MCL, such that *generic* sequence models can be directly applied to MCL problems.
We strongly believe that sequence models will be consistently improved to handle longer sequences, as they have been in recent years; for example, the context length of language models like GPT has been increasing at an incredibly fast pace. We kindly request the reviewer to consider the limitations of our formulation and the current Transformers independently.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Lending Interaction Wings to Recommender Systems with Conversational Agents
Accept (poster)
Summary: This paper proposes a method that combines offline recommendation learning with online decision tree learning to enable recommendation in a conversational format with a few rounds of interaction. The proposed approach is evaluated on multiple datasets and in various validation settings to assess its effectiveness. Strengths: This paper proposes a novel framework called CORE for uncertainty reduction. CORE improves the performance of recommender systems by revealing user preferences through querying attributes and attribute values. The paper evaluates the performance of CORE using multiple datasets and different recommendation methods. The experimental results demonstrate that CORE improves the success rate of recommendations. Weaknesses: (1) This paper heavily relies on decision trees, and there is a significant design cost involved in creating those decision trees. Decision trees are domain-specific and lack clear design guidelines, and the paper does not discuss this aspect in relation to each experimental dataset. Therefore, the weakness lies in the inability to evaluate this aspect. (2) The reliance on metadata such as price, location, and hotel rank in decision trees makes the approach simplistic. In conversational recommender systems, it is important to consider finer item characteristics that can be obtained from diverse evaluations in user reviews while engaging in conversational interactions with AI. Furthermore, in conversational recommender systems, it is crucial to incorporate language models (LMs) to capture linguistic variations and users' intuitive expressions in order to reflect them in the recommendations. However, this aspect is not well addressed in the paper (though Section 4 touches on it briefly). (3) The evaluation setup of this paper raises some concerns.
While the main evaluation metric is the number of turns, the criteria used to determine correctness seem to be based solely on whether an item was checked or not. In the case of hotels, for example, it is important to consider whether a reservation was actually made, and relying solely on item checking may lead to an inflated number of correct predictions. And, while evaluations are performed on different datasets, there is a lack of detailed explanations about the specifics of the conversations and the actual recommendations made in each dataset. Without this information, it becomes difficult to fully understand the effectiveness and characteristics of the proposed method. Furthermore, the experimental results of this paper do not include comparisons with other methods or existing approaches. It is important to compare the proposed method with other recommendation techniques in order to clearly demonstrate its advantages and improvements. Moreover, it appears that there is a lack of qualitative evaluation. While the paper utilizes multiple experimental datasets for validation, it is difficult to grasp the nature of the interactions and how satisfactory item recommendations are achieved in each dataset. Given that the conversation is a crucial aspect, the absence of information on the nature of the conversations makes it challenging to form a clear picture of the system's performance. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It seems that the paper does not address the importance of considering finer item characteristics derived from diverse evaluations in user reviews during conversational interactions with AI. Could you please provide some insights into this aspect and its potential impact on the proposed approach? Are there any existing conversational recommender systems that could serve as comparative methods to the proposed approach in this paper? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper lacks consideration for finer item characteristics obtained from user reviews with language models (LM) to capture linguistic variations and users' intuitive expressions. This limitation restricts the ability to provide more nuanced and personalized recommendations in conversational recommender systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main response above. > CORE heavily relies on decision trees. We want to emphasize that the core idea of our online decision tree is how to compute the certainty gain, as shown in Section 3.1. As the estimated score for each item is calculated offline, there are no extra computation costs to get these scores online. Therefore, CORE only requires the computation to form a chain (as exemplified in Figure A2) instead of an entire tree. Furthermore, since the computation of certainty gain is a non-parametric equation, the computationally costly part would lie in addressing continuous features. But in the context of RS, most features (e.g., 88% of the features in Tmall) are discrete features, which are efficient to compute. > It is needed to consider finer characteristics that can be obtained from diverse evaluations in user reviews. We apologize that we might not fully understand the reviewer’s meaning of “finer characteristics”. But as brief background context, CORE could capture these “finer characteristics” (complicated user preferences hidden in textual responses) by designing specific prompts. For example, in a conversation, a teenager might say that her budget is limited because she is a student. Our Chatbot APIs would encode this information, and if we ask the Chatbot APIs about her preference on the attribute “Price”, they would return “Price: $1-100”. In this case, we can remove the attribute Price from our unchecked attributes. We will show more case studies in our revisions to show that CORE can encode these characteristics through prompt design. > The evaluation setup of this paper raises some concerns. Firstly, these evaluation metrics are widely used in previous conversational RS papers. Secondly, we agree that these metrics lack qualitative evaluation.
Therefore, we will list some examples of conversations generated by CORE and compare them with other conversational RS algorithms through human evaluations. In this regard, each conversation would be evaluated both on how well the queried items match user needs and on how friendly the texts are to the user. Due to the time limitation of the rebuttal period, we plan to add them in our revision. > Are there any existing conversational recommender systems that could serve as comparative methods to the proposed approach in this paper? Our main topic focuses on how to bridge RS to conversational agents, and therefore, our baselines lie in existing conversational RS methods including CRIF [1], UNICORN [2], EAR [3], and CRM [4]. Also, the ability to analyze user textual responses largely depends on what kind of LMs (or conversational components) are used in the method. As CORE can utilize Chatbot APIs, our ability to analyze textual responses depends entirely on which LLMs are used. [1] Learning to Infer User Implicit Preference in Conversational Recommendation. 2022. [2] Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning. 2021. [3] Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. 2020. [4] Conversational recommender system. 2018. --- Rebuttal Comment 1.1: Comment: (1) Thank you for the clarification regarding the computation speed. However, my concern still remains with the design and application of decision trees, which are domain-specific and often lack clear design guidelines. The paper does not appear to address this aspect or discuss its relevance to each experimental dataset. I believe that considering this aspect could provide valuable insights into the applicability and potential limitations of the proposed method. (2) Thank you for providing further insight into the application of CORE in conversational recommender systems.
I appreciate the clarification regarding the utilization of specific prompts to capture user preferences and characteristics during conversations. However, my concern still revolves around the consideration of more intricate item characteristics that can be derived from diverse user evaluations present in reviews. For instance, aspects such as musical genre preferences, specific artists, historical periods, and other nuanced interests may require a deeper level of representation than simple attributes like price. Incorporating such detailed characteristics might result in decision trees becoming more complex. --- Reply to Comment 1.1.1: Title: Response to Reviewer SPx5 Comment: Thanks for your reply. We are very happy to see that some of our clarifications helped. We hope the following can further address your concerns. > Design and application of decision trees are domain-specific and often lack clear design guidelines. We apologize that we might not fully understand the reviewer’s meaning of the terms “design” and “application” (if the reviewer could elaborate further, we would be happy to respond more precisely). As brief background, we think the concerns may lie in two aspects: (i) How to decide which item (from the candidate items) and which attributes (or attribute values) (from the candidate attributes) to query online? As described in line 5 of Algorithm 1, we compute the expected certainty gain of querying items and attributes (or attribute values) and choose the one with the largest expected certainty gain to query. As a result, after multiple runs, CORE forms an online decision tree as illustrated in Figure A2. We want to note that all the above computations only depend on the matrix of candidate items, as illustrated in Figure 1(a), and do not require any handcrafted design. (ii) How to decide the candidate items and candidate attributes (i.e., the matrix of candidate items)?
In our experiments, we use all the raw features as the attributes, since CORE, as introduced in Section 3.2, can deal with both discrete and continuous features. In other words, in our experiments, there are no handcrafted designs in either our online or offline components. We also note that, in practice, there are indeed some cases where CORE is required to address a large feature space and a large item space. In these cases, one simple yet effective solution could be an (offline) re-selection of both items and attributes. For attributes, we can compute the AUC of the RS using each attribute alone as the input, and select those with high AUC as candidates; for items, we can rank the items according to the estimated scores (given by the RS) and select the top ones as candidates. We would like to note that feature selection, which investigates how to extract key features from all the raw features, is one of the key topics in the data mining field and a topic distinct from conversational RS. > Discussion on its relevance to each experimental dataset. As clarified above, in our results, we use all the raw features as the candidate attributes (to verify that CORE can address both discrete and continuous features) and compute which item and attribute (or attribute value) to query at each turn directly following Algorithm 1. In our experiments, we find that in most cases, discrete features, such as category, play more important roles than continuous features, such as date. This observation should hold for the e-commerce domain, as conversational RS is proposed to work on e-commerce platforms. We will summarize the results supporting this observation in our revision. It would be interesting to further explore more observations (i.e., feature selection heuristics) in other domains, which we leave as future work.
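To make the greedy query selection described in the responses above more concrete, here is a hypothetical Python sketch. Everything in it is an illustrative assumption: the scores, the attribute table, and especially the certainty-gain definition (a Gini-style stand-in of our own, not CORE's actual equation from Section 3.1). The point is only the control flow: offline scores are fixed, and the online step picks the attribute whose expected answer eliminates the most estimated relevance mass.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 6

# Offline RS scores, normalized so they can be read as P(item is the target).
scores = rng.random(n_items)
scores /= scores.sum()

# Discrete attribute table (fabricated): attrs[name][i] = value for item i.
attrs = {
    "category": np.array([0, 0, 1, 1, 2, 2]),
    "brand":    np.array([0, 1, 0, 1, 0, 1]),
}

def expected_certainty_gain(name):
    """Expected score mass eliminated by asking for this attribute's value.

    If the user answers value v (with probability = score mass of items
    having v), items with other values are ruled out, eliminating 1 - P(v)
    of the mass. The expectation is therefore 1 - sum_v P(v)^2, a
    Gini-style impurity -- an illustrative choice, not CORE's equation.
    """
    p = np.array([scores[attrs[name] == v].sum()
                  for v in np.unique(attrs[name])])
    return 1.0 - np.sum(p ** 2)

# Greedy online step: query the attribute with the largest expected gain.
best = max(attrs, key=expected_certainty_gain)
```

Repeating this step after each user answer (restricting `scores` to the surviving items) traces out one chain of the online decision tree, which matches the rebuttal's point that only a chain, not the full tree, is ever computed.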
> More intricate item characteristics that can be derived from diverse user evaluations present in reviews, such as musical genre preferences, specific artists, and historical periods. Thanks for your question. We want to note that CORE (i.e., the online decision tree algorithm) is not a feature modeling algorithm (i.e., not a feature representation learning algorithm). We model user preference with two components: (i) The offline component is the RS, a feature modeling algorithm that models offline features (including user historical data and other side information, e.g., social networks) to capture the user's recent (or previous) preferences. (ii) The online component is our conversational agent, which directly obtains the user's online preferences by querying the user; the conversational agent, empowered by an LLM, is expected to translate the user's online conversation (i.e., online features) into user preferences through the LLM, e.g., a user is a student (which can be regarded as an online feature) -> the user prefers low-price items due to a limited budget. CORE is a plug-and-play bridge combining the offline and online components, where offline features are compressed by the RS into estimated scores and online features are summarized by the LLM via prompts. If you have any remaining or further concerns, we are very glad to discuss them further.
Summary: The paper proposes a novel framework called CORE that bridges conversational agents and recommender systems via an uncertainty minimization principle. The framework treats a recommender system as an offline relevance score estimator and a conversational agent as an online relevance score checker. The conversational agent can query either items or attributes (or attribute values) to reduce the uncertainty of the user’s preference and find a target item. The paper shows that CORE can be applied to various recommendation platforms and datasets, and can outperform existing reinforcement learning-based or statistical methods in both hot-start and cold-start settings. The paper also demonstrates how to empower the conversational agent with a pre-trained language model to communicate more naturally with the user. Strengths: 1. The paper presents a comprehensive study of the core task of conversational recommendation by considering the complex situations that arise in conversational contexts. 2. The authors conduct an extensive experimental analysis and exploration. 3. The proposed method achieves lower complexity and latency compared to existing state-of-the-art approaches. Weaknesses: 1. The innovations and core contributions of the paper are questionable. Most conversational recommendation papers consist of offline and online components, but CORE does not appear to have any notable novel features despite considering more complex interaction settings. The improvements in these scenarios do not seem to be sufficient to warrant acceptance of this paper. 2. The formatting and organization of the paper could be improved to facilitate readability, with inconsistencies in notation and an excessive number of annotations. The description in Section 3 is too verbose, with the core method not being highlighted adequately. A concise explanation of Algorithm 1 would suffice to clarify the scenario. 3. 
The core of CORE still relies on an offline recommendation system, and even the online decision making depends on scores from the offline recommendation system. The key challenge of conversational recommendation is determining how to make dynamic decisions based on user interests and conversation context. Existing reinforcement learning-based methods evidently consider more factors, whereas the proposed heuristic method in this paper depends on recommendation scores that themselves have uncertainty in dynamic interactions. The methodological foundations of CORE appear weak and unconvincing. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. The modeling approach preferred by CORE users appears to be more of a heuristic estimate compared to some dynamic decision-making methods in reinforcement learning. How does your method ensure that decision-making performance is better than RL-based methods, given that your method seems more like an improved version of Max-entropy? 2. As mentioned in the Weaknesses Section, all of your decisions rely on the estimated scores from the offline recommendation model. Using this score as a benchmark for uncertainty measurement is dubious. How can you ensure that the recommendation model's estimates are accurate and reliable? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: 1. Please refer to the Weaknesses Section. 2. The lack of comparison with the latest CRS methods, such as CPR[1] and UNICORN[2]. These new methods share some similarities with the core settings of the paper, such as UNICORN's decision space, which also includes item space and attribute space. [1] Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 
2020. Interactive Path Reasoning on Graph for Conversational Recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20). 2073–2083. [2] Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. 2021. Unified conversational recommendation policy learning via graph-based reinforcement learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1431–1441. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main response above. > The innovation and motivation of the paper. We emphasize that the core idea of CORE is not to introduce an online component but to propose a plug-and-play framework that enables any offline RS to query user preferences online, without heavy changes to industrial supervised-learning-based recommendation platforms. Moreover, CORE can use powerful Chatbot APIs, while RL methods that jointly optimize RS components and conversational components cannot, since there is no gradient through APIs. Please refer to the main response above for a detailed comparison between CORE and RL methods. We are very happy to discuss further if you have any remaining concern. > Heavy notations. Thanks for your suggestion. We will clarify them in the revision. > How do you ensure that CORE outperforms RL? Indeed, RL can consider complicated states and encode more factors. However, RL often requires large numbers of training samples (known as data insufficiency) and a relatively small action space (corresponding to querying attributes in our paper). Also, without sufficient training data, RL cannot generalize well to open-world cases. Moreover, RL's performance relies heavily on careful reward function design, while CORE can be easily deployed in various use cases. In many real-world RS cases, the above requirements of RL cannot be fully met, so RL might not reach its best performance. In contrast, CORE, a learning-free method (our online decision tree has no parameters to tune online), achieves stable performance and can perform better when training samples are few and the action space is huge (i.e., querying attribute values). Please refer to the main response for a detailed comparison and a list of comparison experiments. > The methodological foundations of CORE are weak and unconvincing.
An intuitive theoretical explanation of CORE is that the RS function can be regarded as a prior from offline training, which is unlikely to perform well in the online setting, so we propose CORE to refine the prior online to better capture user needs. We propose Proposition 2 and Lemma 1 to analyze CORE's performance in an ideal-RS case and a bad-RS case, respectively. We are very glad to discuss further if you could provide some specific examples of unconvincing foundations. > Lack of comparison with recent RL methods. Thanks for your suggestions. We have included CRIF [1] and UNICORN [2] as two new baselines. The results (Tables R1 and R2) verify that CORE performs well for querying attribute values, while RL methods excel at querying attributes, because querying attribute values involves a huge action space that is not friendly to RL. We will include the complete version of the results in our revision. [1] Learning to Infer User Implicit Preference in Conversational Recommendation. 2022. [2] Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning. 2021. --- Rebuttal Comment 1.1: Comment: This is still a borderline paper to me. I would like to leave the decision to the ACs.
Summary: The paper is about conversational recommender systems (CRS), which are systems that can interact with users through natural language and provide personalized recommendations. The paper addresses the challenge of incorporating a conversational agent into any existing recommender system in a plug-and-play fashion, without requiring reinforcement learning or data collection. The paper proposes CORE, a novel offline-training and online-checking framework that bridges a conversational agent and a recommender system via a unified uncertainty minimization objective. The paper claims that CORE can benefit any recommendation platform and can handle different types of data, attributes, and queries. The paper reports that CORE outperforms existing methods in both hot-start and cold-start recommendation settings, and can communicate as a human if empowered by a pre-trained language model. Strengths: 1) Originality: The paper proposes a novel offline-training and online-checking framework, CORE, that bridges a conversational agent and a recommender system via a unified uncertainty minimization objective. This approach is original in its ability to incorporate a conversational agent into any existing recommender system in a plug-and-play fashion, without requiring reinforcement learning or data collection. 2) Quality: The paper develops a new human-AI recommendation simulator and conducts extensive experiments on eight industrial datasets with nine popular recommendation approaches. The results show that CORE outperforms existing methods in both hot-start and cold-start recommendation settings. 3) Significance: The proposed CORE framework has the potential to benefit any recommendation platform and can handle different types of data, attributes, and queries. This makes it a significant contribution to the field of conversational recommender systems. 
Weaknesses: 1) The introduction does not provide enough background and motivation for the problem of conversational recommender systems. It should explain why this problem is important and challenging, and what are the existing gaps and limitations in the literature. A possible way to improve it is to cite more relevant works and compare them with the proposed approach. 2) The proposed approach in Section 3 is not clearly explained and justified. It does not provide enough details and intuition for how the uncertainty minimization framework works, how the expected certainty gain is derived, how the online decision tree algorithm is implemented, and how the dependence among attributes is considered. It also does not discuss the advantages and disadvantages of the proposed approach compared to other methods. A possible way to improve it is to provide more examples, figures, pseudocode, and analysis to illustrate the proposed approach. 3) The experimental setup in Section 4 is not comprehensive and fair. It does not describe how the datasets are preprocessed, how the hyperparameters are tuned, how the baselines are implemented, and how the evaluation metrics are calculated. 4) The experimental results in Section 4 are not convincing and insightful. They do not show the statistical significance of the performance differences, the impact of different factors/hyperparameters, or the qualitative analysis of the generated conversations. They also do not discuss the limitations and challenges of the proposed approach, such as scalability, robustness, diversity, etc. 5) The references in Section 6 are incomplete and inconsistent. Some references are missing important information such as authors, titles, venues, pages, etc., or have different formats or styles. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How does CORE handle the situation when the user’s answer is not clear or consistent with their previous answers? 
2) How does CORE compare with other conversational recommender systems that use natural language generation or understanding techniques? 3) How does CORE deal with the trade-off between exploration and exploitation in querying items or attributes? 4) How does CORE adapt to different domains or scenarios of recommendation, such as books, movies, etc.? 5) How does CORE cope with the noise or bias in the offline estimated relevance scores or the online user responses? 6) How does CORE handle the scalability and efficiency issues when dealing with large-scale datasets or action spaces? 7) How does CORE incorporate user preferences on multiple attributes or items simultaneously? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main responses above. > Cite more relevant work. We will include more relevant literature in our revision. > More examples, figures, pseudocode, and analysis. We provide detailed derivations of certainty gain and expected certainty gain in Appendix 1.1. We provide a figure illustrating an example of the online decision tree in Figure A2. The pseudocode of CORE is available in Algorithm 1. The dependence among attributes is encoded by introducing the dependence into the formulation of certainty gain, as shown in Eq. (13). Please check the main response above for the detailed advantages and disadvantages of CORE versus other RL methods. > The experimental setup in Section 4 is not comprehensive and fair. All the raw data (i.e., all the features) in these datasets are used to train our RS, and serve as the candidate attributes (and attribute values). CORE has two stages: (i) the offline stage, where the RS is tuned following the classical supervised learning paradigm and the hyper-parameters are listed in Appendix A3.4; (ii) the online stage, where no tuning is needed and there are no hyper-parameters, as summarized in Algorithm 1. All the baselines are described in Appendix 3.3, and we directly follow their official implementations. The computation of the evaluation metrics is described in lines 281 to 287. > The experimental results in Section 4 are not convincing and insightful. We will add the t-test evaluation metrics for each table in our revision. As mentioned above, since there are no hyper-parameters in our online component, we do not discuss the impact of hyper-parameters. We have discussed the effect of the offline component (the RS) from three perspectives, as shown in the main response above. > They also do not discuss the limitations and challenges. We summarize the comparisons between CORE and RL methods in the main response.
> The references in Section 6 are incomplete and inconsistent. We will fix this in our revision. > How does CORE handle the situation when the user's answer is not clear or consistent with their previous answers? Firstly, we allow the user to answer Not Care when the user is unclear about her preference in conversations, as introduced in lines 250-258. Secondly, inconsistency with previous answers can be handled by the LLMs (i.e., Chatbots). As stated in Appendix A4.2, we can use prompts to ask the LLM about the user's preference for specific attributes, and in this case, the LLM is expected to encode the contextual conversation to provide the answers. > How does CORE compare with other conversational RS that use natural language generation or understanding techniques? There are two aspects to evaluating a conversational RS. One is whether the system can find an item satisfying the user within the minimum number of turns, which corresponds to evaluation metrics such as average turns and success rate in the paper. The other is the extent to which the generated texts are friendly to users. From this perspective, we plan to do some case studies on the texts generated by CORE. We will list some examples and compare them with the texts generated by other conversational RS algorithms in terms of human evaluation. Due to the time limitations of the rebuttal period, we plan to add these to our revision. Also, we want to emphasize that CORE does not focus on the quality of the generated language, since we can borrow power from pre-trained Chatbots, as described in Section 3.3 and Appendix A4.2. > How does CORE deal with the trade-off between exploration and exploitation? We introduce Eq. (10) to trade off exploration (querying attributes) against exploitation (recommending items). We also note that Eq.
(10) is extendable, because one can add weights or regularization terms to fit particular use cases (i.e., if a use case prefers exploration, one can add a penalty on querying item v; otherwise, one can add a penalty on querying attribute w_x). We will discuss this further in our revision. > How does CORE adapt to different domains? From Algorithm 1, one can see that CORE only requires the estimated scores of candidate items from the RS. Therefore, CORE can be directly adopted in different recommendation domains once an RS is available. As for the conversational agent, ours uses pre-trained Chatbots and can easily adapt to different domains, since Chatbots such as ChatGPT-3.5 carry rich domain knowledge. > How does CORE cope with the noise or bias? Unbiased RS is actually another big topic in the recommendation field, which studies how to remove bias from offline data; likewise, handling ambiguous text is another big topic in natural language processing. Therefore, we argue that these problems are beyond the scope of this paper. Moreover, CORE acts as a bridge connecting the offline RS and online Chatbots, and is therefore unbiased and denoised if the offline RS and online Chatbots are. > How does CORE handle the scalability and efficiency issues? We believe CORE is efficient when scaling up to large-scale datasets, for the following three reasons: (i) Our RS is offline-trained, so there are no training costs online. (ii) In practice, a recommendation platform has multiple stages, including matching, pre-ranking, ranking, and re-ranking. CORE only needs to operate in the re-ranking stage, where only tens of items need to be scored. (iii) In real-world cases, any RS needs to assign a score to each re-ranked item; CORE only adds the computation of the expected certainty gain for each recommendation. > How does CORE incorporate user preferences?
As introduced in Algorithm 1, CORE holds sets of unchecked items and unchecked attributes. Then, once the user shows her preferences, CORE updates the sets correspondingly. We summarize how CORE performs these updates in Appendix A1.1. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the detailed response. I am satisfied with the authors' response and am improving the score.
Summary: In this paper, a conversational part of a recommender system is proposed. It is assumed that a recommendation model is available that assigns scores to user-item pairs (which estimate the probabilities of acceptance of the corresponding recommendations), items have a number of important attributes (numerical and categorical), and each user has preferences over the values of these attributes. A conversational model should sequentially decide (based on the recommendation scores) whether to ask the current user about the preferred values of one attribute, or to try recommending an item, which can be accepted or rejected by the user. The goals are to maximize the rate of successful dialogs with accepted recommendations and to minimize the average number of rounds until the user accepts a recommendation. In the current paper, the authors propose a simple greedy algorithm that chooses the action minimizing the expected sum of estimated recommendation scores of the unchecked items, that is, items that can still be chosen by the user in light of the information obtained so far. The authors compare their algorithm with two previous baselines based on RL (CRM and EAR) and conclude that the proposed method outperforms RL-based approaches. Strengths: - The paper is mostly well-written; the algorithmic ideas and propositions are clear. - The idea of expected uncertainty minimization is reasonable. Weaknesses: 1. The main questionable point is the contribution of the paper. I have the following doubts: - The papers on conversational RS referenced in related work are very old; I do not see any works from 2021-2023. Expected gain maximization resembles the expected improvement criterion, one of the most widely-used Bayesian optimization algorithms (see "Efficient global optimization of expensive black-box functions." by Jones et al.). Both algorithms are based on the estimation of the expected gain using the posterior distribution.
I think corresponding references are needed here. - The motivation behind the proposed method in comparison with RL-based approaches is not convincing to me. The problem setting inherently leads to RL approaches: it includes actions, states, and rewards, and requires a policy for an optimal trajectory. Why should a simple deterministic empirical greedy algorithm outperform RL-based models? 2. The theoretical part is not perfect. In the problem setting, it would be useful to state explicitly what assumptions underlie the proposed method. For example, it is implicitly assumed that there is a unique item the user needs (therefore the sum of probabilities over items equals 1, see Eq. 5). Another example is that information on user preferences on an attribute carries binary information on user preferences on items, which underlies their division into "checked" and "unchecked". The motivation behind the approach is based on these assumptions, which are rather controversial in the practical cases considered in the experiments. The theoretical results are very simple. The claimed propositions are self-evident, but do not carry much understanding of the specific properties, novelty, and contribution of the proposed EG maximization technique. There are some inaccuracies in equations. For example: - The equation for the action space at line 175 looks formally incorrect. As $W_x$ is different for different $x$, we cannot obtain $A$ as a set product. - Eq. 10 is not formally accurate: the argmax operator should be applied to a function. 3. Experiments and practice. The baselines are rather old (CRM and EAR). What about CRIF from "Learning to Infer User Implicit Preference in Conversational Recommendation"? There is no discussion on the possibility of an efficient implementation of the proposed algorithm. How can the calculation of uncertainty gains be implemented in practice? Naïve summation over all items with the given attribute (see, e.g., eqs. 7-8) for each possible attribute value (see eq.
7) looks impractical in real-time conversational recommender systems. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Could you summarize the proposed learning method? - What is the theoretical foundation behind the proposed greedy algorithm of expected gain maximization? Is it suboptimal under some assumptions? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: I do not see any limitations of the proposed work Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main response above. > The main questionable point is the contribution of the paper. Please see the summary of our contributions in the main response above. > The references are old. Thanks for your suggestion. We will add more recent literature in our revision. > Why does the proposed method outperform RL-based methods? Indeed, RL can consider complicated states and encode more information. However, RL often requires large numbers of training samples (known as data insufficiency) and a relatively small action space (corresponding to querying attributes in our paper). Also, without sufficient training data, RL cannot generalize well to open-world cases. Moreover, RL's performance relies heavily on careful reward function design, while CORE can be easily deployed in various use cases. In many real-world RS cases, the above requirements of RL cannot be fully met, so RL might not reach its best performance. In contrast, CORE, a learning-free method (our online decision tree has no parameters to tune online), achieves stable performance and can perform better when training samples are few and the action space is huge (i.e., querying attribute values). > Implicit assumption on a unique item of a user and binary information on user preferences. We apologize that we might not fully understand the reviewer's meaning here (if the reviewer could elaborate further, we would be happy to respond more precisely). But as brief background context, we believe CORE does not need these assumptions: (i) Target items do not need to be unique, since we can support a set of target items, stated as $a \in \mathcal{V}^*$ in lines 131-132, and Eq. 5 is a normalization trick that normalizes the score of each item over all the unchecked items. (ii) We consider the case where users may not care about specific attributes (or attribute values), as stated in line 251.
Our experiments are also conducted in this setting, as stated in line 278. We will further clarify this in our revision. > The theoretical part is not perfect. Thanks for your suggestion. We will correct these points and carefully check our analysis in our revision. > The baselines are old. Comparison with [1] (CRIF) and [2] (UNICORN): the results (Tables R1 and R2) verify that CORE performs well for querying attribute values, while RL methods excel at querying attributes, because querying attribute values involves a huge action space that is not friendly to RL. We will include the complete version of the results in our revision. [1] Learning to Infer User Implicit Preference in Conversational Recommendation. 2022. [2] Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning. 2021. > How to implement it online? We believe CORE can be easily implemented online, for the following three reasons: (i) Our RS is offline-trained, so there are no training costs online. (ii) In practice, a recommendation platform has multiple stages, including matching, pre-ranking, ranking, and re-ranking. CORE only needs to operate in the re-ranking stage, where only tens of items need to be scored. (iii) In real-world cases, any RS needs to assign a score to each re-ranked item; CORE only adds the computation of the expected certainty gain for each recommendation. We will clarify this in our revision. > Could you summarize the proposed learning method? We have summarized the main contributions of CORE in the main response above. We are very glad to discuss further if anything remains unclear. > What are the theoretical foundations behind it? We apologize that we might not fully understand the reviewer's meaning of "theoretical foundations" (if the reviewer could elaborate further, we would be happy to respond more precisely).
An intuitive theoretical explanation of CORE is that the RS function can be regarded as a prior from offline training, which is unlikely to perform well in the online setting, so we propose CORE to refine the prior online to better capture user needs. We propose Proposition 2 and Lemma 1 to analyze CORE's performance in an ideal-RS case and a bad-RS case, respectively. --- Rebuttal Comment 1.1: Comment: "Implicit assumption on a unique item of a user and binary information on user preferences." I mean that the proposed algorithm is not supported by any theoretical guarantees, and at least some logical motivation is needed. As such, you implicitly assume that each time we seek an item to recommend to a user, there is only one relevant, target item for that user. Otherwise, the sum of probabilities over items could not sum to 1. You consider priors, probabilities, information, entropy, etc. in your text, not just normalization. Anyway, the theoretical assumptions underlying the motivation of the algorithm should be decoupled from the details of the practical application. --- Reply to Comment 1.1.1: Title: Response to Reviewer 7XeM Comment: Thanks for your further clarifications. We have carefully checked Eq. 5, and we notice that the misleading part is the definition of $\mathcal{V}^*$. We apologize for the confusion. We hope the following clarifications can address your concern. > The proposed algorithm is not supported by any theoretical guarantees, and at least some logical motivation is needed. As such, you implicitly assume that each time we seek an item to recommend to a user, there is only one relevant, target item for that user. Here, we provide two aspects to explain our Eq. 5. (i) $\mathcal{V}^*$ should be defined as the first item that the user clicks, rather than the set of items matching the user's needs.
For example, assume that both item A and item B can meet the user's needs; however, the user would first click only one item (either A or B) and then jump to the page showing the detailed information of the clicked item (just as when you browse the Amazon book store: there may be multiple books you are interested in, but you would first click the one book you are most interested in). In our paper, we define $\mathcal{V}^*$ as the set of items matching user needs (e.g., items A and B in the above example), which is indeed not accurate. $\mathcal{V}^*$ in this paper should be defined as the item first clicked by the user, because when a user clicks an item, the user is taken to another page, and in this case we consider the session (i.e., the conversation) finished (as introduced in Definition 1, lines 94-96). In other words, **CORE allows the user to favor multiple items at the same time, but we only focus on the very first clicked item**. In this regard, we can introduce Eq. 5 as $$ \Pr(a \text{ is } \mathcal{V}^*) = \Pr(a \text{ is } \mathcal{V}^* \mid a \in \mathcal{V}_{k-1}) = \frac{\Psi_{\text{RE}}(a)}{\sum_{v \in \mathcal{V}_{k-1}} \Psi_{\text{RE}}(v)}, $$ where we use the estimated scores of the RS as the prior to estimate the probability of the user first clicking item $a$. (ii) If we treat each item equally (as if there were no prior over the candidate items), we can define $\Pr(a \in \mathcal{V}^*)$ as $$ \Pr(a \in \mathcal{V}^*) = \Pr(a \in \mathcal{V}^* \mid a \in \mathcal{V}_{k-1}) = \frac{\sum_{v \in \mathcal{V}^*} \Psi_{\text{RE}}(v)}{\sum_{v \in \mathcal{V}_{k-1}} \Psi_{\text{RE}}(v)}. $$ If one compares the above equation with Eq. 5, one can conclude that we are using $\Psi_{\text{RE}}(a)$ to estimate $\sum_{v \in \mathcal{V}^*} \Psi_{\text{RE}}(v)$, which means that we are implicitly assuming that there is only one item in $\mathcal{V}^*$. However, an assumption under which the above equation holds is that all the items are treated equally.
Namely, if the above equation holds, we can derive that $\Pr(a \in V^*) = \Pr(b \in V^*)$ holds for every possible pair of items $a$ and $b$, meaning that the probability of the user favoring each item is the same. This assumption only holds when there is no prior over the candidate items. Here, we use the estimated scores as the prior to establish a weight factor in the above equation: $$ \Pr(a \in V^*) = \Pr(a \in V^* \mid a \in V_{k-1}) = \frac{\sum_{v\in V^*} \Psi_{RE}(v)}{\sum_{v\in V_{k-1}} \Psi_{RE}(v)} \times \frac{\Psi_{RE}(a)}{\sum_{v\in V^*} \Psi_{RE}(v)} = \frac{\Psi_{RE}(a)}{\sum_{v\in V_{k-1}} \Psi_{RE}(v)}, $$ which is our Eq. 5. We will make this clear in our revision. We appreciate the reviewer's further clarification of the question, and we hope the above explanations address your concern. If there is any remaining concern or further question, we are always willing to answer. > Anyway, theoretical assumptions underlying the motivation of the algorithm should be decoupled from details of practical application. In our experiments, there can be multiple target items (items the user would click when they are posted to the user) in one session, and we consider the session finished (i.e., succeeded) once one target item is posted to the user. This empirical setting is consistent with our theoretical Definition 1 and the above explanation of Eq. 5. There is indeed one assumption about the use cases: there is at least one item matching the user's needs. We consider this a general assumption of RS, because even if no item exactly matches the user's need, any RS will still try to recommend the one closest to it. We will make this clear in our revision.
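To make the normalization in Eq. 5 concrete, here is a minimal sketch (the item names and scores are hypothetical, and `psi` stands in for the RS scores $\Psi_{RE}$): each unchecked candidate's first-click probability is its RS score normalized over the remaining candidate set $V_{k-1}$, so the probabilities sum to 1.

```python
def first_click_probs(psi, candidates):
    """Eq. 5 sketch: Pr(a is first clicked) = psi[a] / sum of psi over
    the remaining candidate set V_{k-1}; probabilities sum to 1."""
    total = sum(psi[v] for v in candidates)
    return {v: psi[v] / total for v in candidates}

# hypothetical RS scores for three remaining candidates
psi = {"A": 2.0, "B": 1.0, "C": 1.0}
probs = first_click_probs(psi, ["A", "B", "C"])
```

Note that the scores need not sum to 1 themselves; only the normalized first-click probabilities do, which is consistent with multiple items in $V^*$.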
Rebuttal 1: Rebuttal: We summarize our responses and the results of the suggested experiments here. We also respond to every specific concern of each reviewer as individual comments below. To summarize our contributions: (i) CORE is a plug-and-play method that can enable any (offline) RS to recommend (i.e., query) items and attribute values online. For this purpose, we develop an offline-training and online-checking paradigm, where we regard RS as an offline estimator and the conversational agent as an online checker. Our proposed online decision tree algorithm can leverage offline estimations from RS to decide what to query online. (ii) CORE can utilize pre-trained Chatbot APIs, since many powerful Chatbots are too heavy to be jointly optimized with RS, and some of them can only be accessed through APIs, such as ChatGPT-3.5. (iii) RL often requires large numbers of training samples (known as data insufficiency) and a relatively small action space (corresponding to querying attributes in our paper). Also, without sufficient training data, RL does not generalize well to open-world cases. Therefore, in many real-world RS cases, RL might not reach its best performance, while CORE, a learning-free method (our online decision tree has no parameters to tune online), can achieve stable performance. As one of the main concerns lies in the detailed comparisons between CORE and RL, below is a list of new experiments. 1. Comparison with [1] (CRIF) and [2] (UNICORN). Results (Tables R1 and R2) verify that CORE performs well at querying attribute values, while RL methods excel at querying attributes, because querying attribute values involves a huge action space that is not friendly to RL. 2. Comparisons with a new user simulation with a diversity metric.
Results (Table R3) show that RL excels at giving diverse recommendations and can therefore address the Matthew effect well when there are multiple rounds in real-world cases, while CORE is good at giving specific recommendations. 3. Ablation studies with different amounts of training data. Results (Table R4) show that CORE is relatively stable with few training samples, whereas RL often requires a huge amount of training data. Tables R1 to R4 can be found in the uploaded PDF. Due to time limitations, we only conducted the experiments on the LastFM and Amazon datasets; we plan to provide complete results in our final version. Another concern lies in whether CORE would be significantly affected by RS. Below is a list of old and new experiments answering this: 1. Tables 1 to 4 (A1 to A4) provide the results of the cold-start setting (on the user side). In these cases, RS knows nothing about the user, and our results show that CORE still outperforms other baselines. Besides, Lemma 1 provides a bound on the expected number of turns in the cold-start setting. These empirical results and theoretical analyses guarantee our performance when RS performs poorly (i.e., assigning the same score to all items). 2. Tables 1 to 4 (A1 to A4) also include different RS methods (with different RS performance). Results show that CORE can consistently benefit their RS compared to other baselines. 3. Table R4 shows the results of the ablation studies. Different amounts of training data often lead to different RS performance, and CORE stably outperforms other baselines. [1] Learning to Infer User Implicit Preference in Conversational Recommendation. 2022. [2] Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning. 2021. Pdf: /pdf/466c6e68fda7846b8d93477c486ad0bc0025972b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a learning framework called CORE that can incorporate a conversational agent into any recommendation platform, to complementarily check estimated offline relevance scores in each online user session. Experimental results on comprehensive benchmark datasets show that CORE outperforms existing reinforcement learning-based and statistics-based approaches in querying items and attributes/attribute values. Strengths: 1. The proposed offline-training and online-checking paradigm bridges a conversational agent and recommender systems via a unified uncertainty minimization framework. It is effective, efficient, and flexible enough to handle recommendations in both cold-start and hot-start settings. 2. Various large-scale benchmark datasets are used in the experiments, and the results are convincing. Weaknesses: 1. Are the comparison results in Tables 1, 2, 3 and 4 statistically significant? It'd be great if t-test results could be provided. 2. The central idea in this paper is to bring a conversational agent to recommendation systems. If possible, it'd be great to conduct an experiment on a real online platform, with real user sessions, to show the superiority of the proposed algorithm. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What kind of data/features in each dataset are used to estimate the offline relevance scores in the first place? Is the accuracy of the offline relevance scores critical to the performance of recommendations after the online-checking framework via the conversational agent? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main response above. 1. It is better to provide t-test results for Tables 1, 2, 3, and 4. Thanks for your suggestion. We will provide t-test results in our revision. 2. It would be great to deploy CORE on online platforms. Thanks for your advice. We are thrilled about bringing CORE online to further verify its performance and use cases. However, as this requires deep cooperation with industry, we are still looking for potential collaborations. 3. What kind of features/data are used? We use all the features provided in the public datasets, including both discrete and continuous features, e.g., seller ID, item ID, category ID, action ID (showing the type of user behavior), and date (showing the time of user behavior) in Taobao, for training RS offline and generating estimated scores online. We will add these statistics in our revision. 4. Is the accuracy of offline recommendations critical to CORE? We evaluate the effect of RS performance on CORE from the following three perspectives: 1. Tables 1 to 4 (A1 to A4) provide the results of the cold-start setting (on the user side). In these cases, RS knows nothing about the user, and our results show that CORE still outperforms other baselines. Besides, Lemma 1 provides a bound on the expected number of turns in the cold-start setting. These empirical results and theoretical analyses guarantee our performance when RS performs poorly (i.e., assigning the same score to all items). 2. Tables 1 to 4 (A1 to A4) also include different RS methods (with different RS performance). Results show that CORE can consistently benefit their RS compared to other baselines. 3. Table R4 shows the results of the ablation studies. Different amounts of training data often lead to different RS performance, and CORE stably outperforms other baselines. We will make this clear in our revision.
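For reference, the significance test requested above can be sketched as a paired t-statistic over matched runs (a minimal sketch; the per-seed success rates below are hypothetical placeholders, not results from the paper):

```python
import math
import statistics

def paired_t(a, b):
    """Paired t-statistic: mean of per-run differences divided by its
    standard error, pairing runs of the two methods by random seed."""
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

# hypothetical per-seed success rates for CORE vs. a baseline
core = [0.82, 0.85, 0.81]
base = [0.78, 0.80, 0.77]
t_stat = paired_t(core, base)
```

A p-value then follows from the t-distribution with `len(d) - 1` degrees of freedom (e.g., via `scipy.stats.ttest_rel`, which performs this paired test directly).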
--- Rebuttal Comment 1.1: Title: Thank the authors for the response Comment: Thanks a lot for the detailed answers to my questions.
Summary: The authors assert that there is a gap between traditional recommender systems trained on offline (historical) preference data and conversational assistants that aim to elicit user feedback about their *current* preferences. The authors assert that conversational assistant training relies on reinforcement learning-based frameworks and suffers from data insufficiency. They propose the CORE method for learning a conversational recommender system based on the idea of maximizing "certainty gain" by using a pre-trained recommender system as a scoring model for an online decision tree process. Experiments across a wide range of datasets demonstrate consistent improvements over reinforcement learning methods for learning conversational recommenders. Strengths: - The problem formulation makes sense and the authors organize the methodology in 3.1 and 3.2 in an intuitive manner. However, as mentioned in the weaknesses section, there could be improvements in clarity. - It is important that the authors extend attribute certainty gain computation to continuous attributes (Lemma 1), as this is a realistic setting for many attributes of interest in real-world situations (see prices, interest rates, or other continuous values). - As a general extension of the above, the authors carefully consider multiple cases that bring the problem setting closer to real-world user interactions and preference considerations (neutral responses, attribute dependence). They extend the derivation of certainty gain across these situations. - Experiments across a variety of datasets and base recommender systems demonstrate that CORE performs better in most cases compared to CRM and EAR baselines. Weaknesses: - The authors should clarify the reasoning for the statement on lines 135-136 that "if [the queried item is in the target set], the session is done and therefore the certainty gain is the summation of all relevance scores in V_{k-1}". Why is this the certainty gain?
It is rooted in the definition in (1) that the certainty gain of a target item sets all items to "checked for certainty", but it is unclear what the basis for this is, or whether it's a choice made to simplify the modeling. - The authors omit discussion of a large area of conversational recommendation focusing on eliciting user feedback via critiques of surfaced attributes [including 1-4]. This area is directly related to the "attribute query" in the online-checking framework of CORE, and merits discussion in the literature review. - The reference to ChatGPT-3.5-turbo seems gratuitous, as there is little mention of it in the main paper other than a statement on line 254 that a pre-trained language model can be used as the Psi_{CO} component. - The experimental analysis seems relatively sparse. It would be good to see a deeper analysis (case studies, for example, or human evaluations) in the main paper body. It would also help to see a discussion of the limitations, edge cases, or remaining challenges in the space, and what other challenging experimental settings were considered. References: [1] Antognini, D. "Interacting with Explanations through Critiquing" (2021) [2] Antognini, D. "Positive and Negative Critiquing for VAE-based Recommenders" (2022) [3] Li, S. "Self-Supervised Bot Play for Conversational Recommendation with Rationales" (2022) [4] Wu, G. "Deep language-based critiquing for recommender system" (2019) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors do not really discuss limitations of the work in a substantive way. I would like to see a greater discussion of challenges and limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions. Please also see the main responses above. > Why is this the certainty gain from finding an item satisfying user needs in lines 135-136? As defined in Eq. (2), our objective is to find an item satisfying user needs, and we then assume that the session is finished because the user would click the item to jump to the item page. Therefore, as shown in Definition 1 (lines 95-96), the certainty gain from finding an item satisfying user needs should be the maximum value, so we set it to the summation of all the unchecked relevance scores. We will further clarify this in our revision. > More literature focusing on eliciting user feedback via critiques of surfaced attributes. Thanks for your suggestion. We plan to add these papers to our revision and discuss the connections between them and our method. > Little mention of ChatGPT-3.5. Thanks for pointing it out. Our conversational component has two parts: one decides what items or attribute values to query (which corresponds to our online decision tree algorithm), and the other generates querying texts and analyzes user responses. As stated in Section 3.3 and Appendix 4.2, ChatGPT-3.5 is used to scale CORE up to free-text real-world use cases, and is indeed interchangeable with other Chatbots such as Llama 2. We will re-organize the main text and the appendix to better explain the utility of ChatGPT here. > Deeper analysis (e.g., case studies or human evaluations) is needed. Thanks for your suggestions. We provide a case study of CORE in Figure A2, and some examples of generated texts can be found in Appendix 4.2. We plan to include more examples and human evaluations of the generated conversations in the revision. > Further discussion of the limitations, edge cases, or remaining challenges. Thanks for your suggestion.
Tables 1 to 4 (A1 to A4) show that CORE can effectively address the cold-start problem on the user side (where RS knows nothing about the user and therefore treats all items equally). However, when it comes to the cold-start problem on the item side (where RS knows nothing about the item), how to use CORE to help RS address it remains an open problem. We will include this edge case in our revision. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed responses to my review & the others. The analysis and limitations remain an area of improvement for me regarding the paper, and my stance is still "weak accept".
Neural Ideal Large Eddy Simulation: Modeling Turbulence with Neural Stochastic Differential Equations
Accept (poster)
Summary: The authors introduce a neural SDE model for LES flow fields. The model structure is motivated by the ideal LES approach and the model learns a data-driven closure term. The learned latent representation captures the variability and fine-scale structure of the fully-resolved DNS solution that is lost by the LES filter. Strengths: The paper is excellently motivated, presented and embedded into the theory. Stochastic modeling for turbulence as presented in this paper is an important direction to explore, and therefore the paper would be a valuable contribution to the field. Weaknesses: 1. The authors seem to use the term "closure model" more loosely than usual in the introduction. Out of the citations for data-driven closure models in line 39, only [45] defines a closure model as the term is used in the context of RANS and LES. 1. Lines 177-182 are slightly misleading as they can be understood to suggest that the model would learn possible DNS realizations as $Z_t$ instead of an opaque latent representation. (Actually, lines 211-213 clarify this but could be moved forward/merged into the first paragraph) 1. A figure showing the flow (and predictions) after, for example, 200, 400, etc. steps would help the reader appreciate how chaotic the flow is and how much the velocity field evolves over 800 steps. 1. A figure comparing the DNS and filtered data would also be helpful for the reader to understand the effect of LES filtering better. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Eq (2): Why can the pressure be eliminated? 1. Line 87: Which boundary conditions do you impose and how do you do so implicitly? 1. Is there a mix-up of $h_\theta$, $h_\psi$, $h_\phi$, $g_\theta$ and $g_\phi$ happening in Eq (8), (9), (10)? 1. Eq (14): You could define $f$ more explicitly here. 1. Eq (16): What do you mean by spatio-temporal lifting? Does w correspond to z? What is $\mathcal{D}$? 1.
What exactly is the relationship between the SDE solver timestep and the simulation timestep? Are the temporal intervals $\mathcal{T}_i$ rescaled to $[0, 1]$? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discuss limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed and thoughtful review. We are glad that you found our paper well-motivated and that it provides an important direction to explore, viz. stochastic modeling of turbulence using neural networks. **Writing** >The authors seem to use the term "closure model" more loosely than usual in the introduction. Out of the citations for data-driven closure models in line 39, only [45] defines a closure model as the term is used in the context of RANS and LES. That is a good catch! We have indeed painted with broader strokes in the introduction to give an overview of the extensive literature on closure models and other forms of data-driven approaches for turbulence. We will fine-tune the writing in the introduction to reflect the specific nature of closure modeling for LES and RANS that we undertake in this work. >Lines 177-182 are slightly misleading as they can be understood to suggest that the model would learn possible DNS realizations as Z_t instead of an opaque latent representation. (Actually, lines 211-213 clarify this but could be moved forward/merged into the first paragraph) In lines 177-182, we started the explanation by introducing one idea at a time: first the need to model the DNS, and only later the fact that we merely need a latent representation rather than the whole DNS. You are right that it can create confusion as to what our method is actually doing. Instead of saying that we "generate DNS trajectories", we can be more explicit at the beginning of the first paragraph that we learn a representation of said DNS trajectories. Furthermore, we can clarify in these paragraphs that we indeed learn an opaque representation in latent space for these DNS trajectories. **Figure showing the flow evolution and the DNS snapshots with corresponding filtered versions** Thank you for the excellent suggestions! We will include these figures in the camera-ready version of the paper.
**Elimination of pressure** The pressure can be eliminated from the Navier-Stokes equations by, e.g., taking the divergence of the momentum equation and using the fact that the divergence of the velocity is zero. This leads to a Poisson-like equation for the pressure, which can then be substituted back into the momentum equation, resulting in an equation where the velocity is the only unknown. For more details, we provide the reference [1, page 295, section 6.2.2]. We will also add the reference to the corresponding sentence in the camera-ready version of the paper. **Boundary conditions** The boundary conditions we impose in the Kolmogorov flow are 2D periodic along both axes. In the newly added cylinder wake dataset, we follow a setup of a static circular cylinder with a no-slip boundary condition on the cylinder surface, a constant Dirichlet boundary condition on the inflow wall, periodic conditions along the vertical axis, and homogeneous Neumann conditions on the outflow wall. (Please see our response to reviewer SnNJ for more details.) In line 87, by 'implicit' we meant that the boundary conditions are assumed to have been enforced in subsequent equations (Eq. 4, Eq. 5) without mentioning the fact explicitly. We can see how this may cause confusion with implicit numerical methods or linear solves. We apologize for the oversight. We shall rephrase this section to avoid confusion. **Notational issues in Eq (8), (9), (10) and (14)** Thank you for catching the error. Eq (8) should say $g_\theta$ instead of $g_\phi$, and Eq (9) should say $h_\phi$ instead of $h_\psi$. We will fix these in the manuscript. Appendix D also reiterates the neural SDE equations with the corrected notation. Thank you for the suggestion on Eq (14); in the text we can spell out $f$ in terms of the small-scale fluctuations induced on the LES field by the DNS field.
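For completeness, the standard pressure-elimination manipulation mentioned above can be sketched as follows (assuming constant unit density):

```latex
% Take the divergence of the incompressible momentum equation
%   \partial_t u + (u\cdot\nabla)u = -\nabla p + \nu\,\nabla^2 u .
% Since \nabla\cdot u = 0, the terms \partial_t(\nabla\cdot u) and
% \nu\,\nabla^2(\nabla\cdot u) vanish, leaving the pressure Poisson equation
\nabla^2 p = -\,\nabla\cdot\big[(u\cdot\nabla)u\big],
% so the pressure is slaved to the velocity,
p = -\,\nabla^{-2}\,\nabla\cdot\big[(u\cdot\nabla)u\big],
% and substituting p back into the momentum equation leaves the velocity
% as the only unknown.
```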
**Spatio-temporal lifting and SDE solver/simulation timesteps** We refer to the DNS trajectory $w$ as the spatio-temporally lifted version of the latent space trajectory $Z_t$, as $w$ lives at both a higher spatial resolution and a finer timescale. $\mathcal{D}$ is the decoder (defined on line 203) which sends the latent space trajectory $Z_t$ to the fluctuation on the LES field caused by the DNS trajectory $w$. The output space of $\mathcal{D}$ has the same dimensionality as the LES field. The SDE timesteps are the same as the DNS simulation timesteps, which are an order of magnitude smaller than the LES simulation timesteps. They are not the same as $\mathcal{T}_i$ (or rather the timesteps we use for the filtered DNS training data), since the filtered DNS trajectories are downsampled by ~10x. As the SDE is meant to capture the fluctuations at the fast timescale, in each LES step the SDE evolves forward by ~10 timesteps. We do rescale the timesteps to $[0, 1]$, but that is for simplicity and is an implementation detail. **References** 1. Deville, M. O., Fischer, P. F., & Mund, E. H. (2002). High-order methods for incompressible fluid flow (Vol. 9). Cambridge University Press. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I will remain with my assessment.
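The substepping described in the rebuttal above (~10 latent SDE steps per coarse LES step, with the interval rescaled to $[0, 1]$) can be sketched with a minimal Euler-Maruyama loop. This is only an illustration: the `drift` and `diffusion` callables are hypothetical stand-ins for the learned networks, and with the diffusion set to zero the loop degenerates to a plain explicit Euler ODE step.

```python
import math
import random

def latent_substeps(z, drift, diffusion, n_sub=10, seed=0):
    """Evolve a scalar latent state over one coarse LES step, rescaled to
    [0, 1], using n_sub Euler-Maruyama substeps (the fine DNS-like scale)."""
    rng = random.Random(seed)
    dt = 1.0 / n_sub
    for _ in range(n_sub):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment, var = dt
        z = z + drift(z) * dt + diffusion(z) * dw
    return z

# with zero diffusion this reduces to deterministic Euler: z -> z * (1 - dt)
z_det = latent_substeps(1.0, drift=lambda z: -z, diffusion=lambda z: 0.0)
```

After the substeps, a decoder would map the final latent state to the fluctuation added to the LES field; that part is omitted here.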
Summary: This paper introduces a data-driven method for approximating the closure term in Large Eddy Simulation (LES). The closure term represents the effect of the scales left unresolved when the computational grid is coarsened through downsampling. In comparison to the traditional physically-informed approach, the learned closure term can automatically capture relevant features without the need for specific domain expertise or manual design. The model improves temporal resolution by employing latent space evolution with stochastic propagation. The proposed model demonstrates competitive performance compared to the filtered DNS reference across different wave numbers, and significantly outperforms the baseline model. Strengths: - The proposed method outperforms several baselines in terms of the squared error and the accuracy of the turbulent kinetic energy spectrum. - The latent space evolution implements the procedure to solve the SDE, which introduces domain-specific inductive bias into the model. Weaknesses: - The model's training relies solely on a single reconstruction loss, which is atypical for stochastic latent variables. The absence of prior regularization with respect to the latent space implies that the stochastic component added at each temporal step acts more like a sparsity regularization, as the model is trained to minimize the stochastic effect rather than explicitly capture the physical dynamics. Apart from prediction accuracy, there is a lack of additional benchmarks demonstrating the advantages of the stochasticity, such as prediction diversity. - When comparing the proposed niLES with deterministic neural network-based models, it should be noted that niLES introduces additional parameters $h_{\phi}$ and $g_{\theta}$ as well as more computational routes, potentially leading to an unfair comparison.
The authors should showcase the model's performance while varying the magnitude of the Wiener process in Equation 15 and compare it to the scenario where the noise magnitude is zero. - The current evaluation protocol does not adequately assess the scalability of the model, particularly with respect to wave numbers. To address this limitation, the authors should exclude a certain range of wave numbers as test cases, enabling a more comprehensive evaluation of the model's scalability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q: It would be better if the authors could show the stochastic property in latent space brings diverse and valid results in the LES space. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. Please find our response below. **Single reconstruction loss** We apologize for the confusion. Eq. (19) indeed contains only the reconstruction loss, which is only a fraction of the training loss we ultimately use for training (see line 218 right after Eq(18)). Eq. (30) in Appendix D shows other loss terms related to the latent space. We will refactor this section and add these additional terms to the main text. **Prediction diversity** This is a great suggestion. We have added a plot in the uploaded PDF (Fig 3) to show how the rollouts in the LES space vary due to the stochastic variation of the latent space trajectories. **Magnitude of the Wiener process** We are adding a new baseline where the magnitude of the Wiener process is zero; see Figs 1,2,5 in the attached PDF. This corresponds to a deterministic NN whose latent space dynamics are parameterized as a neural ODE. We added additional numbers of layers (3x more layers) in the neural ODE drift function to compensate for the fact that the neural SDE contains additional parameters corresponding to prior drift and diffusion. The resulting neural ODE-based architecture has strictly more parameters than our proposed model niLES. **Scalability of the model** Thank you for your feedback on increasing the comprehensiveness of the evaluation. However, we are not fully sure that we understood what you mean by “exclude a certain range of wave numbers”. Could you please clarify your comment? To broaden our evaluation, we have added several more test cases for the evaluation of our model. In particular, for the Kolmogorov dataset we have included the RMSE and TKE spectrum for 6 independent examples. (See Figs 1 and 2 in the attached PDF.) Additionally, we have included another scenario – cylinder wake at Reynolds number 500 (some more details are in our response to reviewer SnNJ). 
This scenario has more than 5x the degrees of freedom (52K vs 9.2K) in the DNS simulation compared to the Kolmogorov flow and exhibits irregular, chaotic vortex shedding. The LES simulation in the cylinder wake case contains 7.7K degrees of freedom compared to 2.3K in the Kolmogorov LES field. For the cylinder wake dataset, we have included the average RMSE and the spread (min / max) as obtained by our method and the baseline methods. See Fig 5 in the attached PDF. We hope this expands the evaluation satisfactorily. --- Rebuttal Comment 1.1: Title: Further questions Comment: I appreciate the authors’ efforts to address my inquiries and share results related to neural ODE and the description of sample diversity. Having gone through the feedback provided by the authors, I have further questions regarding the results, particularly on the comparison to deterministic methods. (1) Concerning the training objectives, the introduced approach learns features in the latent space through VAE-like objectives. Yet, it outperforms deterministic methods when considering reconstruction error. This seems somewhat unexpected to me, given that a standard VAE often yields blurred results and struggles to retain high-quality details. (2) Referring to Figure 4, the deterministic NN (d) seems to generate a sharp output, but it concurrently leads to high-frequency artifacts. My impression is that these artifacts might be attributed more to specific model configurations and training issues rather than an inherent property of deterministic NNs. (3) When it comes to assessing the diversity of samples, how can one accurately evaluate their correctness beyond mere qualitative visualization? --- Reply to Comment 1.1.1: Comment: Thank you for the positive comments regarding our responses and your thoughtful follow up questions. Please find our replies below, and let us know if you have any further questions. (1) We compare the reconstruction errors from long-term rollouts. 
For short-term rollouts (i.e., when the model is unrolled only 8 times, and that is identical to the setup during training) the deterministic approaches and our method yield similar reconstruction errors. In a nutshell, the closure model learns to correct the coarse solver. You have noted correctly that VAE-like objectives have been shown to have a ‘smoothing’ effect on the predictions, which also aids in generalization. However, in our method, the LES samples are not directly samples from the VAE, but rather the VAE-like samples form the fluctuations or corrections to the LES solver. Therefore, the smoothing effect of a VAE does not necessarily translate to the smoothing in the LES field samples (because we do not use those samples as the final output). In fact, we believe that the smoothing-like effect in the fluctuation-space, which dampens the overcorrection of deterministic methods, is crucial to preventing the unphysical energy buildup. This latter energy buildup prevents the highly chaotic system from retaining high-quality details and leads to loss of accuracy over the long term. (2) As you note correctly, the deterministic NN approaches maintain a sharp output over the short term, but over the long term this leads to instabilities and unphysical artifacts. The high-frequency artifacts are caused due to the unphysical build up of turbulent energy over time at the higher frequencies (see Fig 3 in the main paper). An extremely well-tuned deterministic NN might be able to attenuate those issues but such unphysical buildup would eventually lead to the simulation ‘blowing up’. This phenomenon is well known in computational fluid dynamics, and it is usually handled by incorporating a filtering stage that removes spurious high-frequency energy at each time-step (as is done in the Implicit LES). 
Such filtering is based on the spectral properties of the time-stepper, and it is tailored to remove only the spurious components, thus maintaining the high accuracy of the solver in a stable manner. At an abstract level, we can argue that in our formalism the averaging of the fluctuations is analogous to such filtering. The inability to retain high-quality physical features precisely highlights the issues of treating the learning signal as a deterministic LES field. In other words, we attribute the better performance largely to the probabilistic formalism of learning LES, which is now tractable through neural-SDE based modeling. (3) You have raised a good point: one possibility is to compute a large number of ‘nearby’ DNS trajectories and filter them to LES fields. We can then use our samples to compare the generated distribution to the ensemble of the LES fields. While this is theoretically plausible, we would need to overcome the challenge of computing nearby DNS trajectories and comparing high-dimensional distributions, which is computationally expensive. We agree that these comparisons would be fruitful directions to explore in future work.
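The fluctuation-averaging argument above can be made concrete with a small sketch (a toy numpy illustration of the general idea only; the coarse step, the fluctuation sampler, and all names here are hypothetical stand-ins, not the authors' implementation): each stochastic closure sample stays noisy, but the applied correction is their mean, which plays the filtering role described in the reply.

```python
import numpy as np

def coarse_step(u, dt=0.1):
    # toy stand-in for one step of a coarse (LES-resolution) solver
    return u + dt * np.sin(u)

def sampled_fluctuation(u, rng, scale=0.05):
    # toy stand-in for one stochastic closure sample
    # (e.g., a fluctuation decoded from a latent SDE sample)
    return scale * rng.standard_normal(u.shape)

def corrected_step(u, rng, n_samples=32):
    # average many sampled fluctuations; the mean acts like a filter that
    # damps overcorrection while each individual sample remains sharp
    fluctuations = [sampled_fluctuation(u, rng) for _ in range(n_samples)]
    return coarse_step(u) + np.mean(fluctuations, axis=0)

rng = np.random.default_rng(0)
u = np.zeros(16)
u_next = corrected_step(u, rng)
```

Averaging over `n_samples` draws shrinks the variance of the applied correction by a factor of `n_samples`, which is the damping effect the reply attributes to the fluctuation-space smoothing.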
Summary: This submission proposes a data-driven method to learn a closure model that simulates the results of DNS. The key component is a latent stochastic process modeled by a Neural SDE, whose samples are aggregated via a Monte Carlo approximation. Strengths: 1. The model treats the DNS as a stochastic process, instead of a deterministic process as in many previous works. 2. Empirical results indicate it performs well on Kolmogorov flow. Weaknesses: 1. The number of datasets is small. 2. It lacks recent deep learning-based methods as baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Since the work treats the DNS field as stochastic, why can you compute RMSE in Figure 3? 2. Why do you design a stochastic process on the latent space, instead of directly on the original field? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See questions and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. **RMSE in Fig 3** You are correct that we treat DNS fields stochastically. However, the ideal LES field is deterministic; it is approximated by a filtered DNS instance, which is a valid approximation for short-time rollouts. Over longer time horizons, however, chaoticity dominates. In that regime RMSE ceases to be a meaningful metric; there we have provided statistical error metrics such as the Turbulent Kinetic Energy (TKE) spectrum. **Baselines and number of datasets** We have added cylinder wake as an additional dataset showcasing the performance of our method. In addition, we added another deterministic NN-based LES as a baseline using the Neural-ODE framework. Please see our response to reviewer SnNJ. The conceptual framing of using an SDE for probabilistic LES is, to the best of our knowledge, the first of its kind, so we do not have such probabilistic NN-based baselines to directly compare to. We hope our method becomes a baseline for future work. **Why a latent space instead of the original field?** The main objective of LES modeling is the reduction of the computational cost. In general, LES formulations tend to be more expensive than the original systems at the same resolution. However, the cost reduction is achieved by using a much coarser grid compared to the original one, which, compounded by the larger time-steps, results in an overall cheaper method. In our case, running an SDE at the original resolution would be equivalent to running the DNS, which would ultimately defeat the purpose of LES. The main insight/inductive bias we would like to incorporate is that, instead of running multiple DNS trajectories, we assume there is a low-dimensional latent space in which we can efficiently sample multiple short-term trajectories, and whose aggregated statistics remain close to the statistics stemming from directly computed DNS trajectories. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply and new results.
I think it addresses my concerns well. I will keep my score.
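The latent-space rationale in the rebuttal above can be illustrated with a toy rollout (our own sketch under stated assumptions: the drift and diffusion functions below are fixed stand-ins for the learned networks, and the latent dimension is arbitrary). An Euler-Maruyama scheme rolls out many cheap short-term latent trajectories, whose aggregated statistics are then used rather than any single sample:

```python
import numpy as np

LATENT_DIM = 8  # hypothetical latent dimension

def drift(z):
    # hypothetical stand-in for the learned prior drift network
    return -z

def diffusion(z):
    # hypothetical stand-in for the learned diffusion network
    return 0.1 * np.ones_like(z)

def euler_maruyama_rollout(z0, n_steps, dt, rng):
    # one latent trajectory of the SDE dz = f(z) dt + g(z) dW
    z = z0.copy()
    for _ in range(n_steps):
        dw = rng.standard_normal(z.shape) * np.sqrt(dt)
        z = z + drift(z) * dt + diffusion(z) * dw
    return z

def latent_ensemble(z0, n_traj=16, n_steps=10, dt=0.01, seed=0):
    # sample many short trajectories cheaply in the latent space
    rng = np.random.default_rng(seed)
    return np.stack([euler_maruyama_rollout(z0, n_steps, dt, rng)
                     for _ in range(n_traj)])

ensemble = latent_ensemble(np.ones(LATENT_DIM))
z_mean = ensemble.mean(axis=0)  # aggregated latent statistic, then decoded
```

Running the ensemble at latent resolution rather than DNS resolution is what keeps the sampling cheap, which is the inductive bias the authors describe.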
Summary: This paper targets learning turbulence closure models for RANS simulations via neural networks. The paper proposes to use a neural SDE on the latent space of a transformer to predict different samples from the distribution of the next state, and then compute an average over these. This process is unrolled and trained for a sequence of multiple steps. Strengths: Overall, this is a good idea, and I'm not aware of an NSDE being previously used in this form. Thus, I see the general direction of the paper and the promising approach as strong points. Weaknesses: On the other hand, the paper targets a single, two-dimensional Kolmogorov flow scenario as the only test case. In addition, only a single deterministic NN is compared to (plus an implicit LES solver). For this single data set, the paper is lacking a stable evaluation: multiple, differently initialized models are not evaluated across multiple tests to obtain a stable result. In addition, NeurIPS is a very broad ML venue. Turbulence is definitely an exciting topic here, but even more important would be a broader evaluation, ideally with substantially different scenarios to show that the method has merit beyond turbulence. In its current form, I don't think that the results are sufficient for a NeurIPS paper. I see two ways to improve this aspect of the submission: either the authors focus their writing on the turbulence scenario (cf. below), and present multiple scenarios in this context, or non-turbulence cases are included to broaden the scope. A more stable evaluation with multiple models and tests should be included either way. This potentially also could help to show the benefits of the method more clearly. Right now the gains in terms of accuracy and the differences in the TKE spectrum seem to be mild. Additional cases could show areas where the approach gives larger improvements. In addition, I would also recommend that the authors include additional learned baselines.
This is less crucial, but would nonetheless help to put the work into the context of previous methods at NeurIPS, ICLR & co. I also do want to mention two weak points in the writing. One is that I found the motivation (esp. L44) quite unintuitive: the "inductive bias of the LES field" is not very clear, and the summary implies this is "simply" a matter of choosing the right architecture. Things get clearer afterwards, but reading the paper front to back, I think this summarizing question is not helping a reader. The conclusions are also not a good fit with the rest of the paper: suddenly, a transition is made to generic chaotic systems. The whole previous paper targets a single specific scenario in the form of Kolmogorov turbulence, and presents a single set of results. Hence, this outlook is not supported by the content of the paper. I can understand that the authors have hopes that their method will at some point in the future generalize to other cases, but it should be made clear that this is an outlook. Rather, references to specific works where the authors see potential would be interesting to give here. Overall, I want to encourage the authors to continue their direction of work. Nonetheless, I find it difficult to directly argue for accepting this paper in its current form. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Which other, specific applications and scenarios do the authors see for their method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed briefly.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
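Since both the review and the rebuttals lean on the turbulent kinetic energy (TKE) spectrum as the long-term statistical metric, here is a minimal shell-averaged computation for a 2D velocity field (the standard textbook construction, written as our own sketch rather than the paper's evaluation code):

```python
import numpy as np

def tke_spectrum_2d(u, v):
    # shell-averaged turbulent kinetic energy spectrum of a 2D velocity field
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)  # energy per Fourier mode
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    # sum mode energies into integer wavenumber shells
    return np.bincount(kmag.ravel(), weights=e.ravel())

# toy field: a single Fourier mode at wavenumber 4, zero cross-stream velocity
n = 64
x = np.arange(n) * 2 * np.pi / n
u = np.sin(4 * x)[None, :] * np.ones((n, 1))
v = np.zeros((n, n))
spec = tke_spectrum_2d(u, v)  # all energy lands in shell |k| = 4
```

Comparing such spectra between the learned LES rollout and filtered DNS is what reveals the unphysical high-frequency energy buildup discussed in the rebuttals, even when pointwise RMSE has ceased to be meaningful.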
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and detailed feedback. We appreciate your encouragement and your suggestions on how to improve the quality of the manuscript. **More scenarios** We have included in the uploaded PDF an instance of our proposed methodology applied to cylinder wake at a high Reynolds number of 500, where the system exhibits irregular vortex shedding and chaotic flow. The difficulty of such an example is two-fold [3]: the need for a nonuniform mesh that is usually refined near the cylinder, and the need to capture the boundary layer near the cylinder. We hope this adds diversity to the turbulent flows that this method could be applied to. See Fig 4 in the uploaded PDF for a qualitative rollout and Fig 5 for the accuracy plots with the new dataset. We have also shown the spread (min/max values) among 10 test cases in the dataset. **Stable Evaluation** Thank you for the suggestion. For both the Kolmogorov datasets, we have added evaluations across 6 different test cases (see Fig 1 and Fig 2 in the attached PDF). We plot both the RMSE and the TKE among the 6 test cases. While our training method unrolls for only 8 steps, the evaluation is unrolled to 100s of steps, well beyond the training horizon. **Why turbulent flows?** > I see two ways to improve this aspect of the submission: either the authors focus their writing on the turbulence scenario (cf. below), and present multiple scenarios in this context, or non-turbulence cases are included to broaden the scope Thank you for the comment. We target turbulent flows as a prototypical chaotic system with an impact in real-world applications encompassing engineering and science, such as weather and climate. In fact, many of the difficulties inherent to chaotic systems manifest in turbulent flows. For example, maintaining long-term statistics accurately (such as turbulent kinetic energy) in turbulent flows is especially challenging.
Many data-driven approaches become unstable for longer rollouts [1,2]. Your suggestion of investigating the method’s applicability to other flows is appreciated, and as mentioned above we have added a new setup for such flows. We will modify the text in order to convey this point more clearly. **Gains in Accuracy and TKE** While the gains in accuracy shown in Fig. 3 (of the manuscript) are seemingly modest, achieving similar gains using traditional computational techniques for turbulent flows would require an expensive direct numerical simulation (DNS). For instance, a high-order DNS that achieves a similar quality is roughly two orders of magnitude slower. To the best of our knowledge, similar works on Neural Network-based turbulence closure models do not achieve such gains at such a high Reynolds number (20,000) while remaining stable well beyond the training horizon. Furthermore, among spectral element methods, the Implicit LES method is considered to be the state-of-the-art [4]. Even against this approach we have demonstrated that our method produces more accurate simulations while exhibiting long-term stability. **Additional baselines** Thank you for the suggestion. We are including a Neural ODE model [5] as a learned baseline for deterministic LES. As you pointed out, one of the novel aspects of our approach is leveraging the probabilistic formalism of LES to design the algorithmic pipeline of our method. To the best of our knowledge, this has not been proposed before; therefore, we were not able to find other probabilistic LES approaches to be used as learned baselines. **Writing** We acknowledge your feedback on the clarity of the writing. We will properly nuance the outlook section of the paper, particularly the connection of the results shown in this manuscript and the broader area of chaotic dynamics.
We will also shift some of the explanations throughout the paper to better articulate how the motivation and modeling insights guided our choices in the algorithmic pipeline, in particular the choice of an NSDE in latent space to simulate DNS efficiently as a closure model. **References** 1. Beck, A., & Kurz, M. (2021). A perspective on machine learning methods in turbulence modeling. GAMM‐Mitteilungen, 44(1), e202100002. 2. Moser, R. D., Haering, S. W., & Yalla, G. R. (2021). Statistical properties of subgrid-scale turbulence models. Annual Review of Fluid Mechanics, 53, 255-286. 3. Williamson, C. H. (1995). Vortex dynamics in the wake of a cylinder. In Fluid vortices (pp. 155-234). Dordrecht: Springer Netherlands. 4. Bosshard, C., Deville, M. O., Dehbi, A., & Leriche, E. (2015). UDNS or LES, that is the question. Open Journal of Fluid Dynamics, 5(04), 339. 5. Chen, R. T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. Advances in neural information processing systems, 31. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I’d like to thank the authors for the comments and the updated results. The cylinder case is very good to see! In terms of evaluation, I would encourage the authors to provide an averaged evaluation across multiple trained models (with different random seeds) for a final version. Nonetheless, I’d be happy to support an accept for this paper, and I’ve raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for the positive feedback. We will update the final version with the new dataset and expand the evaluation with multiple trained models initialized with random seeds.
Rebuttal 1: Rebuttal: We thank all reviewers for providing thoughtful reviews and constructive feedback. We are encouraged by the positive comments that our method is well-motivated, clearly presented, and provides an important direction to explore in the area of data-driven turbulence closure modeling. To address the reviewers’ concerns regarding the need for a broader evaluation, we have made the following high-level changes: - **Additional learned baseline.** We have added an additional deterministic NN-based baseline method: this architecture uses the same encoder-decoder but uses a Neural ODE-based latent evolution. This corresponds to zeroing out the stochasticity due to the Wiener process in the Neural SDE formulation. Furthermore, to address the concern that the number of parameters in the Neural SDE might be higher because of the prior, posterior drift and diffusion, we have used 3x more layers (12 vs 4) in the drift function of the Neural ODE to compensate for this effect. We have reevaluated the test cases for Kolmogorov flow in Figs 1 and 2 of the attached PDF, including the new baseline, and also included four additional test cases from the Kolmogorov flow dataset. - **Additional turbulence scenario.** We have added an additional dataset – cylinder wake, which is a challenging instance of a chaotic Navier-Stokes flow. In addition to its complex geometry, this flow has a high Reynolds number of 500, at which it is known to exhibit irregular vortex shedding. We have added the evaluation pipeline for our method and the baselines using this new dataset. Our response to reviewer SnNJ contains more details. See Fig 4 in the attached PDF for a qualitative plot and Fig 5 for accuracy plots. Furthermore, we have addressed (or will address in a final version) the comments regarding typos and clarity issues pointed out by the reviewers. We hope that our explanations are satisfactory, and we are happy to answer any further questions that the reviewers may have.
Pdf: /pdf/49b0ca4529b178590c7d1f011dd0b50eee18b10b.pdf
NeurIPS_2023_submissions_huggingface
2023
FedNAR: Federated Optimization with Normalized Annealing Regularization
Accept (poster)
Summary: This paper delves into the effect of weight decay in the realm of federated optimization. The authors conduct a series of experiments highlighting how specific elements in federated learning, such as the presence of diverse data and the execution of local updates, can amplify the influence of weight decay. The paper further provides a theoretical exploration of weight decay within federated learning and introduces a novel methodology named FedNAR, which is derived from their analysis. The newly proposed FedNAR method showcases enhanced convergence speed and performance on various simulated federated tasks, and notably, it displays a heightened tolerance to complications introduced by weight decay. Strengths: 1. The writing and presentation are generally of high quality, making the idea and method of this paper easy to follow. 2. The research includes sufficient empirical findings. Six different federated learning algorithms, encompassing various variants of FedAvg, are examined to investigate the impact of weight decay. Weight decay is explored across a wide range, providing clear insights into their findings. 3. The proposed algorithm, FedNAR, is well-motivated by empirical findings and appears to be a simple yet effective enhancement. It can be readily adapted to different federated learning backbone algorithms. 4. The study offers comprehensive and rigorous theoretical analysis of weight decay in federated learning. The results are novel, crucial, and hold the potential to catalyze further investigations within the federated learning community. 5. Moreover, the paper’s experimental design is comprehensive, encompassing a broad spectrum of scenarios that integrate both vision and language datasets along with a variety of hyperparameter configurations. Weaknesses: 1. The bound is similar to previous works, without providing any superior bounds. 
This may not be a significant issue, since incorporating weight decay itself makes theoretical analysis harder and can lead to convergence challenges. 2. Though this is not my major concern, it would definitely make the paper stronger to try experiments at larger scales, e.g., GPT fine-tuning on various downstream tasks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. As demonstrated in the paper, weight decay plays a critical role in federated learning. Is there a more effective approach to determining the optimal weight decay value for FedNAR? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: As illustrated in weakness, the final bound is similar as previous works. What’s more, it would be better if the algorithms can be verified in more real-world heterogeneous data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Theoretical results Our theoretical analysis is constructed within the broader **non-convex** framework and is underpinned by **minimal** assumptions. This accomplishment is significant given the intricacies of federated optimization coupled with non-iid data distributions. The bounds we derive represent a **generalized** rendition of earlier findings – specifically, by setting the weight decay term to 0, we are able to recover prior outcomes. Furthermore, a sublinear bound can be achieved by setting the data heterogeneity parameter to 0, akin to previous investigations. In addition to these considerations, it's worth noting that the inclusion of weight decay introduces an additional layer of complexity, posing challenges for our theoretical analysis. This further underscores the non-trivial nature of our theoretical results. ## Additional large-scale datasets We've expanded our experimentation to encompass more expansive and intricate datasets, including CIFAR-100 and Tiny-ImageNet. For a comprehensive understanding of these endeavors, kindly refer to **Table 1** in the global PDF. Due to temporal and resource constraints, certain tasks such as GPT fine-tuning will be pursued in our future work. ## Hyperparameter selection FedNAR serves as a versatile plug-in compatible with a wide array of Federated Learning algorithms. Therefore, it suffices to employ a similar value as used in the original backbone algorithms, which can be conveniently determined through grid search, similar to practices adopted in previous studies. Consequently, there is no additional burden in the realm of hyperparameter selection for FedNAR. Notably, as evidenced by Figure 4 in the submission, FedNAR exhibits heightened robustness to the initial weight decay, implying **greater flexibility** in hyperparameter selection due to the efficacy of our co-clipping strategy. --- Rebuttal Comment 1.1: Comment: Thanks for the response. 
It resolved most of my concerns and I will raise my score.
Summary: The study discusses the role of weight decay in enhancing generalization performance in deep neural network optimization and in avoiding overfitting in Federated Learning (FL). The authors highlight the influence of the weight decay value on FL algorithms' convergence. To mitigate this issue, Federated optimization with Normalized Annealing Regularization (FedNAR) is introduced, which modulates each update's magnitude through co-clipping of the gradient and weight decay. The algorithm shows improved accuracy in experiments on diverse vision and language datasets with various federated optimization algorithms. Strengths: - The motivation of the paper and the corresponding solution are straightforward. - The article is clearly articulated and readily understandable. - This paper theoretically analyzes the impact of local training's weight decay on the convergence of federated learning. - Experiments on both image and text datasets validate the effectiveness of the proposed method. Weaknesses: - To further validate the effectiveness of the proposed method, it would be beneficial to conduct experiments on more challenging datasets such as CIFAR100 and Tiny-ImageNet. Additionally, tests on more realistic benchmarks, like LEAF, that encompass feature disparity or imbalanced data, would provide even stronger evidence of its efficacy. - From the results in Figure 4, it is not clear whether the proposed method is more stable with respect to the choice of the initial value of weight decay. In a similar vein, an ablation study on the choice of the co-clipping threshold is required. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is the proposed method also effective on methods that utilize client-specific learnable parameters, such as FedDC [1]? [1] L. Gao, et al., FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction, CVPR 2022. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to "Weakness" Section Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Additional large-scale datasets We have integrated experiments involving CIFAR-100 and Tiny-ImageNet. Kindly refer to **Table 1** in the global PDF for a detailed presentation of the outcomes. Our FedNAR consistently upholds its superior performance with about **2%~7% improvement** across all these more intricate datasets. **Experiment details:** For CIFAR-100 and Tiny-ImageNet, we ensured uniformity by retaining the same settings, parameters, and model configuration employed for the CIFAR-10 dataset, as elucidated in Section 5.1, lines 270 to 281 of our initial submission. In the case of Tiny-ImageNet, where the input size is 64x64, a minor adaptation was made to the ResNet-18 model. This modification involved transitioning the final pooling layer from avg_pool2d to adaptive_avg_pool2d to align with the altered image dimensions. **LEAF benchmarks:** In fact, the Shakespeare dataset utilized in our present submission originates from LEAF. We have employed the publicly available codes from LEAF to create the dataset, resulting in a pragmatic partitioning of the data. We will include more datasets from the LEAF benchmark in future work. ## Additional ablation studies **Stability with respect to initial weight decay:** The primary objective of Figure 4 is to illustrate the enhanced stability of FedNAR in relation to different initial weight decay selections. Upon examining Figure 4, it becomes apparent that an initial value of 0.1 is substantial, leading to a decline in performance when contrasted with the choice of 0.01 across original FedAvg, FedProx, and SCAFFOLD methodologies. However, with the incorporation of FedNAR, while there is an initial dip in accuracy compared to the 0.01 choice, accuracy gradually recuperates and even surpasses the performance of the 0.01 option. This dynamic demonstrates that FedNAR possesses the capability to autonomously rectify the detrimental impact of an excessively large initial value. 
In light of the observations outlined in Section 3, where marginally elevated weight decay values result in noticeable performance degradation, FedNAR emerges as a natural and efficacious remedy for this concern. **Ablations on the co-clipping threshold:** Ablation studies have been incorporated to analyze the impact of different co-clipping thresholds. The thresholds chosen for evaluation include {5, 10, 20, 40}. Please refer to **Figure 1** in the global PDF for details. The findings underscore a discernible trade-off associated with selecting the maximum norm threshold. Opting for a lower threshold results in amplified regularization, constraining each optimization step and consequently affecting performance negatively. Conversely, a higher threshold leads to less frequent occurrences of clipping, thereby yielding a milder effect. However, it's noteworthy that our results demonstrate a consistent pattern across all three backbone algorithms: regardless of the chosen maximum norm threshold, FedNAR consistently exhibits better performance. ## FedDC-NAR Yes, FedNAR is also **effective** on FedDC. We undertake experiments employing the publicly accessible official codes of FedDC, encompassing both CIFAR-10 and CIFAR-100 datasets. Data is partitioned across 100 clients according to a Dirichlet distribution, with an imbalance parameter from {0.3, 1, 10}. We train ResNet-18 across 500 rounds, where 20 clients are randomly selected per round. The training protocol employs a learning rate of 0.01 and an initial weight decay of 0.005. Comprehensive outcomes are presented in **Table 5** of the global PDF. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions in detail. Most of my concerns are resolved and I will keep my original score.
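For readers following the co-clipping discussion, the local update can be sketched roughly as follows (a minimal numpy paraphrase of the mechanism described in the paper and rebuttals; the function name and the exact clipping form are our own illustration, not the authors' code): the gradient and the weight decay term are clipped jointly, so a large decayed-parameter term cannot inflate the local step.

```python
import numpy as np

def fednar_local_step(w, grad, lr, wd, max_norm):
    # combine the gradient and the weight decay term into one update
    # direction, then clip their joint norm (the "co-clipping" idea)
    update = grad + wd * w
    norm = np.linalg.norm(update)
    if norm > max_norm:
        update *= max_norm / norm
    return w - lr * update

# with large weights and weight decay, the step size is still bounded
w = np.full(4, 10.0)
w_next = fednar_local_step(w, grad=np.ones(4), lr=0.1, wd=0.1, max_norm=1.0)
```

Because the two terms are clipped together, each local step is bounded by `lr * max_norm` regardless of the initial weight decay, which is consistent with the robustness to the initial value reported in Figure 4 and with the max-norm trade-off seen in the threshold ablation.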
Summary: This paper first describes an important and challenging problem in Federated Learning: that the performance of FL is very sensitive to the choice of the weight decay hyper-parameter for local optimization. The authors produced data to demonstrate the sensitivity of the weight decay hyper-parameter. The paper then outlines a general analysis framework for any client-side learning rate and weight-decay adjustment scheme, and proposes an adaptive weight decay scheme which adjusts the hyper-parameter inversely proportionally to the magnitude of the sum of the local gradient and the decayed parameters. The authors provide analysis and performance guarantees for their proposed adaptive decay method, and conduct experiments to demonstrate the applicability of their method as a "plug-in" for different types of FL methods. Strengths: 1. Hyper-parameter sensitivity is often an overlooked issue in FL. Existing methods addressed this problem via hyper-parameter optimization, which is costly. This method instead makes the weight decay parameter adaptive, eliminating the need for a search. 2. The authors have shown the robustness of their method as a plug-in on various FL methods, and its robustness to the initialization of the decay value, which is critical to make it actually useful. Weaknesses: In the experiments, the authors ran 5 local epochs for each algorithm. When the FL training process starts from random initialization, a single local epoch often produces the best results because it limits the client drift away from the server model. I would like to see how the proposed method performs under a single local epoch. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The authors have shown the robustness of their proposed adaptive weight decay scheme to the initial weight decay choice. But the scheme also depends on the learning rate schedule l_t, the decay schedule \mu_t, and the maximum norm A.
In the experiments, the authors set A = 10. How do these HPs affect the proposed method? 2. FedProx is similar to weight decay, but shrinks toward the server parameters instead of 0. Can the proposed method be used for adapting the proximal regularizer weight in FedProx? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Single-epoch performance Yes, FedNAR is also **effective** in a single-epoch setting. We perform experiments using CIFAR-10 data, employing a data heterogeneity parameter of 0.3. Our approach adheres to the identical configurations, parameters, and model specifications detailed in Section 5.1, lines 270 to 281 of our original submission. The selected backbone algorithms encompass FedAvg, FedProx, and SCAFFOLD. The outcomes underscore that FedNAR consistently attains superior performance and accelerates convergence on all the aforementioned backbone algorithms. For a comprehensive overview, kindly refer to the results provided in **Table 3** of the global PDF. ## Additional ablation studies We expand our ablation studies to encompass learning rate scheduling, weight decay scheduling, and max norm exploration. For each of these ablation studies, the backbone algorithms include FedAvg, FedProx, and SCAFFOLD. Please refer to **Table 2** and **Figure 1** in the global PDF for the detailed results. Regarding the **learning rate**, our current submission employs exponential decay in line with prior FL research practices [24, 26]. For further ablation studies, we have additionally evaluated cosine decay and inverse linear decay, where the first is common in standard centralized training and the second is proposed in [11]. Regarding **weight decay** scheduling, in conjunction with the exponential decay approach applied in our current submission, we also include cosine decay and inverse linear decay methods for an extended evaluation. Regarding the **maximum norm** parameter, we opt for values of {5, 10, 20, 40}. The outcomes demonstrate a discernible trade-off concerning the selection of this parameter. A lower maximum norm tends to amplify regularization, constricting each optimization step, which in turn adversely impacts performance. Conversely, a higher maximum norm results in less frequent clipping occurrences, yielding a diminished impact. 
Notably, our results indicate that across all three backbone algorithms, and for every designated maximum norm value, FedNAR consistently outperforms the baseline algorithm. ## Apply FedNAR in the proximal term of FedProx Yes, the idea of FedNAR can be seamlessly extended to the proximal term of FedProx. The implementation is similar. We proceed to replicate experiments on CIFAR-10, adhering to the same setup outlined in Section 5. To enhance the validation of this notion, we modify the coefficient of the proximal term ($\mu$) across {0.001, 0.01, 0.1}. The outcomes are presented comprehensively in **Table 4** of the global PDF. **Deeper analysis about the extension on FedProx:** The results showcase that, across all $\mu$ values, the inclusion of co-clipping in the FedProx framework yields improved performance. However, it's noteworthy that the extent of enhancement is not as conspicuous as when this technique is applied to weight decay. This disparity could be attributed to 1. The impact of FedProx loss and weight decay varies. While the FedProx loss confines the gap between the current local weight and the global weight, effectively curbing the extent of the local optimization step, this inherently mirrors the effects of the co-clipping scheme in empirical terms. Consequently, the incorporation of co-clipping doesn't provide substantial additional benefits in such a scenario. 2. In the context of FedProx, the proximal term represents the L2-norm of the disparity between the current local weight and the global weight. This term is anticipated to exert a relatively lesser impact compared to the current weight itself. 3. The co-clipping approach within FedNAR is intentionally tailored to complement weight decay, substantiated by both empirical findings in Section 3 and theoretical analyses presented in Section 4. 
In particular, our observations in Section 3 underline that the combination of multiple local updates and the inherent unevenness of data distribution in FL scenarios heightens the sensitivity of FL algorithms to weight decay. Additionally, the insights provided in Section 4 illustrate that co-clipping is pivotal for convergence analysis when coupled with weight decay. It's important to note that these attributes cannot be directly extrapolated to analyze the proximal term within the context of FedProx.
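For reference, the standard FedProx local objective discussed above augments the client loss with a proximal term (notation assumed here: $F_i$ is client $i$'s loss, $w^{t}$ the current global weights, and $\mu$ the coefficient varied in the experiment):

```latex
\min_{w} \; F_i(w) + \frac{\mu}{2}\,\lVert w - w^{t} \rVert_2^2
```

Since the proximal term already bounds how far a local step can drift from the global weights, it empirically overlaps with the effect of co-clipping, which is consistent with the smaller gains reported in the rebuttal above.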
Summary: The paper investigated the effects of weight decay in the scenario of federated learning, especially for the stage of local updates. It was found that even subtle changes of weight decay values might lead to drastic performance drops, and the observations motivated the authors to conduct convergence analysis considering the factor of weight decay. The proposed method, FedNAR, was supported by both theoretical analysis and empirical results, and demonstrated advantages when incorporated into various FL backbone algorithms. Strengths: - The paper is clearly written and well organized. In addition, the idea was motivated and inspired by an empirical study of weight decay in Section 3, which made the paper more sound. - The authors investigated the influence of changes in weight decay of local updates, which was overlooked previously, and found that the model performance was sensitive to the selection of weight decay values. - A theoretical analysis of convergence with weight decay was presented in the paper, and supported the proposed FedNAR method. Weaknesses: - It seemed that FedNAR worked well on most federated learning backbone algorithms except for some adaptive methods such as FedAvgM and FedAdam. The accuracy was even worse after applying FedNAR to these methods. Can the authors expand on this phenomenon and provide some insights? Do adaptive methods themselves have the ability to adjust update trajectories, so that adaptive weight decay might not work as expected? - Experiments were still at a small scale and can be further improved. Currently the authors only chose one dataset, CIFAR-10, for image classification and one dataset, Shakespeare, for next-character prediction, which is not sufficient. More large-scale datasets such as ImageNet, and practical ones with more realistic splits like CelebA (as suggested in the LEAF benchmark) or Apple's FLAIR dataset, could be considered. Besides, details of the model architecture for each task were not clear.
The selection of model structures might also affect the final performance and should be analyzed as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why did FedNAR perform worse on some adaptive FL methods such as FedAdam and FedAvgM? - Details about model architectures are missing; these and more experiments should be included to support the proposed method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations in Section 6 in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## FedNAR and adaptive algorithms: To commence, we wish to emphasize that FedNAR is also **effective** for adaptive algorithms such as FedAdam and FedAvgM. Please consult Table 1 and Table 2 in our original submission for reference. Among the 10 settings involving FedAdam and FedAvgM covered by these tables, FedNAR demonstrates slightly inferior performance in only 2 instances. Consequently, it remains evident that FedNAR consistently delivers enhanced outcomes within the context of adaptive methods. **Deeper analysis of adaptive algorithms and FedNAR:** These two algorithms employ update mechanisms based on momentum at the global level. This implies that the global update direction incorporates an accumulation of preceding global updates. This cumulative effect aids in rectifying current updates by counteracting deviations stemming from specific selected clients. In essence, momentum-based methodologies themselves possess attributes akin to those introduced in our FedNAR approach, which aims to control the local drift due to multiple local updates and data heterogeneity. Nevertheless, it is noteworthy that FedNAR brings forth additional enhancements, despite the inherent benefits of momentum-based techniques. ## Additional large-scale datasets Our current submission includes the CIFAR-10 and Shakespeare datasets, with the Shakespeare dataset already incorporating a practical LEAF-based setup. Additionally, we have performed experiments on CIFAR-100 and Tiny-ImageNet, both of which are also widely used benchmarks in federated learning studies. The outcomes are presented in **Table 1** of the PDF within the global response. Our FedNAR consistently maintains its superior performance, with about **2%~7% improvement** across **all** these larger and more intricate datasets. While being mindful of time and computational limitations, we intend to expand our experimentation to include CelebA and Apple's FLAIR dataset in future work.
**Experiment details:** For CIFAR-100 and Tiny-ImageNet, we maintained the same settings, parameters, and model utilized with the CIFAR-10 dataset, as outlined in Section 5.1, lines 270 to 281 of our initial submission. In the case of Tiny-ImageNet, as the input size is 64x64, we made a minor adjustment to the ResNet-18 model by changing the final pooling layer from avg_pool2d to adaptive_avg_pool2d to align with the image dimensions. ## Model details For vision tasks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, we use a standard ResNet-18 following previous works, as mentioned in Section 3, Line 125. For language tasks, we use a standard transformer encoder model. Specifically, we add positional encodings to the input, forward it through the 6 attention layers, and use a fully connected layer for the final prediction. Please refer to the *shake_transf* class in *util_models.py* in our released code for details. --- Rebuttal Comment 1.1: Comment: Thanks for the response. It resolved most of my concerns and I will keep my original score.
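The momentum-based global update mechanism discussed in the rebuttal above (as used by FedAvgM-style methods, where the server accumulates averaged client deltas) can be sketched as follows; the variable names and sign convention are illustrative, not the exact implementation:

```python
def server_momentum_step(w_global, delta_avg, velocity, lr=1.0, beta=0.9):
    """One FedAvgM-style server update: the velocity accumulates past
    averaged client deltas, smoothing out per-round client drift."""
    velocity = [beta * v + d for v, d in zip(velocity, delta_avg)]
    w_global = [w - lr * v for w, v in zip(w_global, velocity)]
    return w_global, velocity

# Two rounds with the same averaged delta: the second step is larger
# because the velocity has accumulated the previous round's direction.
w, v = [1.0], [0.0]
w, v = server_momentum_step(w, [0.5], v)   # v = 0.5,  w = 0.5
w, v = server_momentum_step(w, [0.5], v)   # v = 0.95, w = -0.45
```

This accumulation is what gives adaptive/momentum methods some built-in resistance to per-client deviations, partially overlapping with what FedNAR provides.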
Rebuttal 1: Rebuttal: We express our sincere gratitude to the reviewers for their constructive and encouraging feedback. Dedicated to the continuous refinement of our work, we wish to emphasize a key contribution of our paper: **it represents the pioneering effort in systematically exploring the significance of weight decay in existing FL methods, underscored by both rigorous theoretical analysis and comprehensive experimentation - which we think are generally applicable beyond our proposed FedNAR method. Stemming from our insights, we introduced FedNAR—an essential algorithmic element that can seamlessly integrate with existing FL techniques, enabling adaptive adjustments of weight decay for enhanced convergence and model quality.** Given the time constraints of this rebuttal period, we have undertaken notable expansions in our experiments in terms of: (1) assimilating a wider and more complex range of datasets, (2) deepening our ablation studies, and (3) presenting new algorithmic innovations. The updated findings are detailed in the attached PDF. Pdf: /pdf/8796d618317071b0d2d6fb632bc7c1f2efab4e7e.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF
Reject
Summary: This work proposes a new attack on monocular 3D object detectors utilising a NeRF representation to create multi-view consistent attacks via Lift3D [26]. The authors show successful attacks on the nuScenes dataset and the effect of their design choices on attack success. Furthermore, a mitigation strategy is shown for making monocular 3D object detection robust to these types of attacks. Strengths: This manuscript includes a comprehensive set of ablation studies and detailed analyses, which effectively highlight the influence of different components in the proposed pipeline. Figure 4 is especially illustrative and insightful. Notably, the successful tackling of numerous monocular 3D detectors, each with varying detection strategies, is commendable. The authors present an effective strategy to counteract the proposed adversarial attacks on monocular detection systems. The overall presentation of the paper is lucid and the figures contribute significantly to accurately conveying the intended ideas. Weaknesses: 1- A comparison against baselines is missing, most notably the mesh-based baseline [43]. This comparison is very important since the proposed setup in Lift3D [26] is very similar when constraining the NeRF to shape and texture latents, resembling the mesh generation and texturing scheme in [43]. 2- A crucial evaluation protocol has been overlooked. In the context of adversarial attacks, the imperceptibility of the attack is a significant factor; how discernible is the attack in contrast to the clean sample? This critical information is absent in this work. It leaves us questioning the perceptibility of image corruption in terms of pixel alteration. If the corruption is so pronounced that even a human observer fails to detect the cars, can it still be considered a successful attack? Previous studies typically provide this information [43]. Moreover, the inclusion of tests on KITTI should be considered vital to the evaluation process.
Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Despite some concerns, the work presents an innovative methodology for crafting adversarial attacks on the crucial application of monocular 3D object detection in autonomous vehicles. The manuscript is well-structured and successfully demonstrates attacks, supported by a wealth of insightful analysis. However, the absence of comparisons with similar baselines and the lack of detailed evaluation metrics assessing imperceptibility prevent me from fully endorsing the paper at this stage. I would appreciate further clarification from the authors on these points before finalising my decision. Minor Comments: The authors could consider including additional related references [a,b] that were overlooked. An animated presentation of a KITTI video demonstrating successful attacks would be an informative addition. [a] AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds, ECCV 2020 [b] SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications, AAAI 2020 --------------------------------------------- Post-rebuttal thoughts: The authors have addressed my concerns regarding the baselines and the imperceptibility of the attack in the rebuttal. I don't think real-life applications are necessary here (as other reviewers think) and hence I increase my score to accept. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed feedback, and for appreciating that our work is innovative, well-structured and provides insightful analysis. Below, we reply to individual questions and comments raised by the reviewer: **(1) Mesh Comparison.** This is a good point. We add an experiment using a ShapeNet car model as a mesh baseline. We use PyTorch3D’s differentiable renderer and optimize the vertex colors as an adversarial example to attack 3D detectors (BEVDet). Just like the setting of the NeRF counterpart, we randomly render the mesh model and paste the patch onto the original images. The attack performance in terms of mAP and NDS is slightly lower than that of the NeRF counterpart. This may be attributed to the fact that the latent space of NeRF weights is higher-dimensional than vertex colors, providing many more solutions for the attack, which results in better attack performance. | Method | NDS | mAP | | ----------- | ----------- | ----------- | | Clean | 0.3822 | 0.3076 | | Mesh attack | 0.3018 | 0.2183 | | NeRF attack | 0.2648 | 0.1895 | **(2) Perceptibility.** Thank you for bringing this valuable feedback. Compared with [43,44], our adversarial examples are less likely to be spotted by humans, as we carry out non-contact attacks, use feasible 3D shapes like usual vehicles, and display camouflaged adversarial textures. In addition, in Fig. 3 (a) of the main paper, we demonstrate that our adversarial example is realistic enough to be detected by 3D detectors, as well as by human eyes. We also add statistics investigating the influence of distance and pixel proportion. In the setting of 3D vision, the farther the distance, the smaller the pixel proportion (size of patch / (1600*900)). From the table below, we can observe that as the distance increases, the pixel proportion decreases and the attack performance decreases accordingly.
This indicates that the higher the perceptibility (larger pixel proportion), the better the attack performance. | Distance (m) | 10 | 11 | 12 | 13 | 14 | | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | | Pixel proportion (%)| 4.67 | 3.67 | 2.96 | 2.42 | 2.04 | | Performance drop (%)| 41.47 | 39.56 | 37.11 | 34.74 | 32.52 | **(3) Additional References.** Thank you for your advice. We will revise the Related Work section to include a discussion of these relevant references. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the great rebuttal and effort for real-world example. I increased my score to 7. --- Reply to Comment 1.1.1: Title: Thanks for your positive feedback Comment: We are glad that we have addressed your concerns. We thank you again for the valuable comments.
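The pixel proportion in the table above is simply the patch area divided by the 1600x900 image area. As a worked check (the 336x200 patch size here is hypothetical, chosen to reproduce the reported 4.67% at the 10 m distance):

```python
IMAGE_W, IMAGE_H = 1600, 900

def pixel_proportion(patch_w, patch_h):
    """Fraction of the image covered by the rendered patch, in percent."""
    return 100.0 * (patch_w * patch_h) / (IMAGE_W * IMAGE_H)

# 336 * 200 = 67,200 pixels out of 1,440,000, i.e. about 4.67%.
prop = pixel_proportion(336, 200)
```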
Summary: This work develops adversarial attacks against 3D object detectors by utilizing instance-level NeRFs. They start with a representation of a vehicle, parameterized by a NeRF that predicts both geometry and texture, and render the vehicle into an image, which they compose into the original image by copy-pasting. They use the composited image to adversarially attack 3D object detectors, which provides a gradient signal used to optimize the NeRF (texture only). Experiments show their adversarial examples are effective against a variety of different 3D object detectors, and they show that training on these samples improves robustness (and even overall performance). Strengths: * Novel application of NeRFs, utilizing the full differentiability to optimize for adversarial texture. * Work is well written and overall clear to follow. * Analysis provides good insights (referring to Sec 5.3 analysis of 3D detector architecture robustness and Sec 5.4 adversarial training actually boosting performance). * Multiple architectures used in experiments. Weaknesses: * Section 4.4 could use more elaboration - this is a key section for the overall work and in the current revision is quite vague. * Some of the attacks (Fig. 3) do not look photorealistic - how can the reader be convinced these adversarial samples would actually work in the real world? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Section 4.4: How many rendered images are used? Are there any heuristics used to make sure the pasted patches are physically realizable? * What are the computational costs of generating a single attack? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors provide discussion of limitations (real world safety).
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed feedback and for appreciating the novelty of our idea and clear writing. In the following, we reply to individual questions and comments raised by the reviewer: **(1) Elaborate Section 4.4.** In our adversarial training, we first render the trained adversarial example and locally store 10,000 rendered images to avoid repeated computation. Then, we follow the standard training process of the original 3D detectors, but only modify the data processing: after sampling the original images, we randomly insert the cached adversarial patch into them while keeping the ground truth unchanged. This approach optimizes the detector to neglect the effects of adversarial attacks, thereby enhancing its robustness. In addition, we find that our adversarial training not only improves robustness but also enhances clean data performance, demonstrating the effectiveness of our method. **(2) About realism.** Thank you for pointing this out. We have conducted real-world experiments during the rebuttal phase. Please refer to the one-page PDF for more information. By printing the adversarial texture and adhering it to a vehicle model, the adversarial model successfully reduces the predicted confidence, demonstrating its practicality in real-world scenarios. **(3) Number of rendered images.** We use 10,000 rendered images for adversarial training. We use the hyperparameters in Section A of the supplementary material to control the location and rotation of objects. We do not use any heuristics except for the proposed primitive-aware sampling and semantic-guided regularization. **(4) Computational costs.** As we perform transferable attacks, our pipeline consists of two phases: training and inference. The training phase takes approximately two days using 8 NVIDIA A100 GPUs for 5 epochs on the nuScenes dataset. The inference phase for each frame, which involves rendering a single patch, takes around 0.2 seconds on an A100 GPU.
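The data-processing step described in (1) above — inserting a cached adversarial patch into a sampled image while leaving the ground truth untouched — can be sketched as follows; the array layout and function name are illustrative, not the authors' implementation:

```python
import random

def paste_patch(image, patch, rng=random):
    """Paste a cached adversarial patch into a copy of an image at a
    random location. The ground-truth labels are intentionally left
    unchanged by this augmentation step."""
    ih, iw = len(image), len(image[0])
    ph, pw = len(patch), len(patch[0])
    top = rng.randrange(ih - ph + 1)
    left = rng.randrange(iw - pw + 1)
    out = [row[:] for row in image]          # copy, keep original intact
    for r in range(ph):
        out[top + r][left:left + pw] = patch[r]
    return out

img = [[0] * 8 for _ in range(8)]
aug = paste_patch(img, [[1, 1], [1, 1]])
```

In actual training this would operate on image tensors and the 10,000 cached renders, but the control flow is the same: copy the sampled image, paste a patch at a random pose-consistent location, and keep the labels unchanged.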
Summary: Deep neural networks (DNNs) have shown susceptibility to adversarial examples, which raises significant safety concerns, particularly in safety-critical applications like DNN-based autonomous driving systems and 3D object detection. While there is a wealth of research on image-level attacks, most of them focus on the 2D pixel space, which may not always translate into physically realistic attacks in our 3D world. In this paper, the authors present Adv3D, the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs). The utilization of NeRFs allows for the generation of adversarial examples that possess photorealistic appearances and accurate 3D generation, thereby enabling more realistic and realizable adversarial attacks in the 3D domain. The authors train their adversarial NeRF by minimizing the confidence of surrounding objects predicted by 3D detectors on the training set. They evaluate Adv3D on an unseen validation set and demonstrate its ability to significantly degrade performance when rendering the NeRF in various sampled poses. To ensure the practicality of the adversarial examples, the authors propose primitive-aware sampling and semantic-guided regularization techniques, which facilitate 3D patch attacks with camouflage adversarial textures. The experimental results showcase the generalizability of the trained adversarial NeRF across different poses, scenes, and 3D detectors. Additionally, the authors provide a defense mechanism against these attacks through adversarial training via data augmentation. In summary, the authors introduce Adv3D as a novel approach that models adversarial examples using NeRFs, resulting in more realistic and realizable attacks in the 3D domain. They demonstrate the effectiveness of their method through extensive evaluations and propose a defense strategy to mitigate the impact of these attacks. Strengths: 1. The writing and presentation of this paper are good and clear. 2. 
The idea of leveraging NeRF in generating adversarial examples is interesting. 3. The authors conducted sufficient evaluation of the proposed method. Weaknesses: 1. The practicality of the proposed attack is questionable. 2. There is a lack of real-world experiments. 3. The study seems to lack technical insights, as adversarial attacks are well established. There is no surprise that using some fancy new idea can lead to adversarial examples, but the essence is the same. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the potential negative societal impact of this study, so it should be fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed feedback and for perceiving our methods as novel and effective. In the following, we reply to individual questions and comments raised by the reviewer: **(1,2) Real-world Experiments.** Thank you for pointing this out. We have conducted real-world experiments during the rebuttal phase. Please refer to the one-page PDF for more information. By printing the adversarial texture and adhering it to a vehicle model, the adversarial model successfully reduces the predicted confidence, demonstrating its practicality in real-world scenarios. **(3) Technical Insights.** We appreciate the feedback on our study. However, we respectfully disagree with the assessment that our work lacks technical insights. It is non-trivial to generate 3D adversarial examples using NeRF. We illustrate our contribution as follows: 1. Directly using a NeRF that models a whole scene as an adversarial example is impractical and difficult to realize in the real world. To provide a feasible attack, we propose primitive-aware sampling to enable 3D patch attacks in which the adversarial NeRF makes only a small modification to the original 3D environment. Furthermore, we introduce semantic-guided regularization that allows for a clear distinction between feasible and unfeasible areas. This enhances physical realizability by removing adversarial texture on infeasible areas, such as tires and wheels. In addition, our newly added real-world experiments also display satisfactory attack results (please see the one-page PDF), proving the physical realizability and effectiveness of our method in practice. 2. To perform transferable attacks across poses and scenes, we formulate our learning objective as Expectation Over Transformation (EOT). The experimental results demonstrate that our method transfers well to different poses, unseen scenarios, and detectors in a non-contact manner.
Additionally, we provide an adversarial defense method that not only improves robustness but also enhances clean data performance, demonstrating the effectiveness and benefits of our method. 3. We conduct extensive experiments to evaluate the robustness of different types of 3D detectors, including FoV and BEV, and provide a detailed analysis of each. This analysis may provide insightful implications for the development of more robust 3D detectors in the future. Specifically, in Section 5.3, we find that query-based detectors (DETR3D) are the most robust detectors, which provides valuable insights for building 3D detectors with enhanced robustness. We believe that the three aforementioned contributions collectively make a notable and insightful impact in the field. --- Rebuttal Comment 1.1: Title: Response Comment: I would thank the authors for the rebuttal, especially for their efforts in creating real-world examples. However, the examples actually increase the concerns from the reviewer on the practicality of the attack. I would have to say that as a human, the confidence of the vehicle with the adversarial sticker to be a real one will drop in my mind. If I simply make the sticker with the same texture and color style of the road, the confidence may be even lower from the detector. Regarding the insights, I still think that the formulation of an adversarial attack using NeRF is not necessarily better than other adversarial attack formulations. I did not challenge the formulation itself, but the need to leverage NeRF for adversarial attacks. As mentioned before, I strongly believe a pattern of sticker that mimic the background objects (tree or building) will also lower the confidence of the prediction. So I will maintain my current score of 4. --- Reply to Comment 1.1.1: Title: Thank you for the feedback Comment: Dear reviewer RUyN, We thank you for the comment. We appreciate the feedback. 
However, there may be a misunderstanding regarding the reviewer’s statement about how we attack the detector. Allow us to first clarify our setting: our adversarial example aims to minimize the confidence of all surrounding objects (itself + others) in a non-contact manner. Simply making the sticker with the same texture and color style of the road will not lower the confidence of other untouched objects, as it would just blend the object into the background. Our adversarial examples, however, are effective in lowering the confidence of all surrounding objects, both in digital and real-world settings. Our adversarial attack pipeline is object-agnostic and can be adapted to any category of objects, like trees or buildings. We chose vehicles as adversarial examples because they are the most common objects in driving scenarios. Evaluating and improving the robustness of 3D detectors based on vehicles can be the most effective approach. If there are any further concerns or questions, we would be happy to address them in further discussion. Thank you!
Summary: The authors proposed new generative adversarial examples in the form of NeRFs, in the context of driving scenarios. The training objective is minimizing the 3D detection confidence from a variety of views. The parameters to optimize are the latent input to the NeRF, which encodes shape and texture info. Rendering is naturally differentiable due to the usage of NeRF. To improve physical realizability, they propose three methods: primitive-aware sampling, NeRF disentanglement, and semantic-guided regularization. The authors conducted experiments on the widely used nuScenes dataset to evaluate the performance drop. The results show that their method is able to reduce the detection performance of various detectors, whether they are FOV-based detectors or birdview-based ones. They also evaluated the transferability of their method, and the adversarial training defense method. Strengths: Using NeRF as a 3D adversarial example representation seems novel and interesting. The NeRF representation is naturally differentiable in terms of rendering, so it makes the adversarial attack problem easier. Also, with more uses of NeRF in 3D vision, it is important to explore the vulnerability in NeRF itself. Such adversarial attacks may highlight the potential security issues in NeRF. The attacking framework (expectation over transformation) and the NeRF rendering framework they use (Lift3D) are standard. The method is mostly built upon existing works; it seems not hard to implement their method. The writing is clear. Weaknesses: My major concern is whether the NeRF formulation is necessary, from the motivation perspective. In line 175, they fixed the shape and only optimized the texture latent code. The optimization is essentially finding the color and density of the volume. However, I believe most vehicle objects are not translucent; the optimized 3D object is very hard to realize. This is evident as the authors need to improve the physical realizability (line 180).
This leads to Occam's razor: do we really need NeRFs to reach the effectiveness/realizability of the 3D attack? So we are missing a baseline here: optimizing the surface texture as a 3D mesh, using existing differentiable mesh renderers (such as Neural Mesh Renderer). The latter is easier to optimize (2D texture space), and more physically realizable (because it is a texture map rather than a volume). In line 166, the authors said "enables patch attacks in a 3D-aware manner by lifting the 2D patch to a 3D box", so we really need a baseline to show that such lifting is necessary. Also, it is not clear how rendering the NeRF into 3D scenes is done. In Fig. 3, the lighting of the NeRF object is not consistent with the environment, and we can see typical blurriness of NeRF. Another weakness is that the setting is not sophisticated enough to be "Driving Scenarios". At first glance, it looks like attacking self-driving algorithms, but the point clouds are not used (correct me if I am wrong). The detection methods (FCOS3D, PGD-Det, etc.) are based on monocular/multi-view 2D images instead of multi-sensor input. In Fig. 3, the inserted adversarial example does not seem to block the LiDAR rays. The experiment is not done through full driving simulation software, but by rendering 3D objects into existing 3D data. Whether such a mixed environment can represent real-world driving scenarios is not clear. It would be better to claim a general 3D detection scenario and do more experiments with other objects, instead of only claiming driving-specific scenarios. In general, my decision largely depends on the first point: the NeRF representation may not be necessary under the current settings. Optimizing the texture image should just work; such a volume formulation makes it harder to physically realize and does not bring much benefit other than differentiable rendering. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
How does the method compare to a naive baseline, such as direct optimization of the texture of the car? Is it possible to do any simple ablation experiments? 2. Is it possible to reconstruct the NeRF, 3D-print it, then test it in the real world? 3. How is the NeRF rendered into nuScenes? There are shadows under the NeRF renderings, but somehow the lighting is not consistent. Is it because the NeRF formulation omits shading (line 119)? Can you clarify? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed the limitations about dataset annotations and potential harmful consequences. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the detailed feedback and for appreciating the novelty of our idea and clear writing. In the following, we reply to individual questions and comments raised by the reviewer: **(1) Is it necessary to use NeRF as the representation of adversarial examples?** We acknowledge that other 3D representations like meshes have their own advantages as adversarial examples, such as having a clear surface definition. However, NeRF also has its distinguished advantages that deserve exploration for crafting adversarial examples: 1. **Latent representation.** The high-dimensional latent space of NeRF provides greater flexibility for downstream applications. In Section C of the supplementary material, we demonstrate that given the latent embedding of an adversarial texture, we can easily transfer the trained texture to other previously unseen vehicles while preserving fidelity. In contrast, transferring adversarial textures to unseen vehicles using a mesh can be challenging. 2. **NeRF vs Mesh.** As suggested, we conduct a comparison of a NeRF attack and a mesh attack. We provide a simple experiment using a randomly picked ShapeNet car model as a mesh baseline. We used PyTorch3D’s differentiable renderer and optimized the vertex colors as an adversarial example to attack 3D detectors. The attack performance in mAP and NDS is slightly lower than the NeRF counterpart. This may be attributed to the latent space of the NeRF network being a higher-dimensional representation than vertex colors, providing many more solutions for the attack, which results in a better attack effect. This experiment also proves the advantage of using a latent representation for adversarial examples.

| Method | NDS | mAP |
| ----------- | ----------- | ----------- |
| Clean | 0.3822 | 0.3076 |
| Mesh attack | 0.3018 | 0.2183 |
| NeRF attack | 0.2648 | 0.1895 |

3.
**Physical realizability of NeRF.** Our method defines the NeRF volume in SDF (Signed Distance Field) space, converging the volume to a surface area on the zero-level set, so the volume has a meaningful surface definition. Our real-world experiments (please see the one-page PDF) also demonstrate that the adversarial texture can be created in the real world and displays satisfactory attack results. 4. **Realism.** In some cases, NeRF is more realistic than meshes (e.g., Lift3D vs a ShapeNet model). This is because NeRF is created from real-world captured data, while meshes often require artists to manually adjust vertices, textures, and lighting, which sometimes suffer from domain gaps. Therefore, we hypothesize that **1.** improving the robustness of detectors based on meshes may not guarantee improvement in the real world, and **2.** the adversarial textures trained using NeRF (especially under semantic-guided regularization) may generalize better to the real world. 5. **Broader impact.** The recent development of NeRF has led to remarkable progress in NeRF-based driving scene simulation [a,b,c]. Our adversarial framework is general and can be extended to integrate with the advances in NeRF-based simulators to benefit a wide spectrum of practical systems. For instance, our framework can be combined with UniSim [a] to perform adversarial closed-loop evaluations of self-driving cars in NeRF environments, or with ClimateNeRF [b] to identify adverse weather conditions that may corrupt the autonomous driving system. We believe that our work provides valuable insights and opens up new possibilities for creating authentic adversarial evaluations that improve the robustness of self-driving cars. **(2) Point cloud generation.** It does not take much effort to simultaneously generate images and LiDAR point clouds using NeRF as described in [a]. In our work, we chose to only evaluate the most common and challenging modality, which is images, for brevity.
We leave the multi-modality adversarial evaluation for future exploration. **(3) Claim issue: "Driving Scenarios".** Thanks for pointing this out. It would be clearer and more accurate to use "3D object detection" than "Driving Scenarios". We will revise our claim accordingly. **(4) Question 1: Comparison.** In Tab. 3 of the main paper, we provide a comparison of full texture optimization and semantic-guided regularization. The attacked mAP is $0.132$ versus $0.148$. As expected, reducing the area of adversarial parts slightly decreases the attack performance. **(5) Question 2: Real-world experiments.** Yes. We have added real-world experiments during the rebuttal phase. Please refer to the one-page PDF for more information. By printing the adversarial texture on A4 paper and adhering it to a vehicle model, the adversarial model successfully reduces the predicted confidence, demonstrating its practicality in real-world scenarios. **(6) Question 3: Rendering issue.** Our NeRF follows the standard paradigm that omits shading. We find it sufficient to cast shadows from a pre-computed shadow map. Although lighting estimation is vital for physically-based rendering, it is beyond the scope of this paper. Future work can leverage [d] to perform accurate shadow casting. **References** [a] UniSim: A Neural Closed-Loop Sensor Simulator, CVPR 2023 [b] ClimateNeRF: Physically-based Neural Rendering for Extreme Climate Synthesis, ICCV 2023 [c] Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field, CVPR 2023 [d] Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes, CVPR 2023
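The "standard paradigm that omits shading" in point (6) is the usual emission-absorption volume rendering quadrature, in which each ray sample carries a view-independent color weighted by opacity and transmittance. A minimal numpy sketch (function name and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Standard emission-absorption volume rendering along one ray.

    colors:    (N, 3) per-sample RGB (no shading/lighting term, matching
               the shading-free NeRF formulation discussed in the rebuttal)
    densities: (N,) per-sample volume density sigma >= 0
    deltas:    (N,) distance between consecutive samples
    """
    # opacity of each ray segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# An effectively opaque red first sample hides everything behind it.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
densities = np.array([1e6, 1e6])   # very dense -> alpha ~ 1
deltas = np.array([1.0, 1.0])
rgb = composite_ray(colors, densities, deltas)
```

Because no lighting term enters the sum, any environment-consistent shading (as in Fig. 3) has to come from a separate mechanism such as the pre-computed shadow map mentioned above.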
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their insightful reviews. Before addressing the specific questions in the individual replies, we would like to first reiterate our motivation and contribution, and then provide a detailed description of the experiments that we have added during the rebuttal phase. **(1) Motivation and Contribution** Given the safety-critical demand for self-driving cars, it is critical to gain a deeper understanding of the robustness of 3D detectors in driving scenarios using 3D adversarial examples like NeRF. However, it is non-trivial to apply NeRF in adversarial attacks. We illustrate our contribution as follows: 1. Directly applying a NeRF that models a whole scene as an adversarial example is impractical and difficult to realize in the real world. To provide a feasible attack, we propose primitive-aware sampling to enable 3D patch attacks in which the adversarial NeRF makes only a small modification to the original 3D environment. Furthermore, we introduce semantic-guided regularization that allows for a clear distinction between feasible and infeasible areas. This enhances physical realizability by removing adversarial texture on infeasible areas, such as tires and wheels. In addition, our newly added real-world experiments also display satisfactory attack results (please see the one-page PDF), proving the physical realizability and effectiveness of our method in practice. 2. To perform transferable attacks across poses and scenes, we formulate our learning objective as Expectation Over Transformation (EOT). The experimental results demonstrate that our method transfers well to different poses, unseen scenarios, and detectors in a non-contact manner. Additionally, we provide an adversarial defense method that not only improves robustness but also enhances clean data performance, demonstrating the effectiveness and benefits of our method. 3.
We conduct extensive experiments to evaluate the robustness of different types of 3D detectors, including FoV and BEV, and provide a detailed analysis of each. This analysis may provide insightful implications for the development of more robust 3D detectors in the future. Specifically, in Section 5.3, we find that query-based detectors (DETR3D) are the most robust, which provides valuable insights for building 3D detectors with enhanced robustness. **(2) Broader Impact** The recent development of NeRF has led to remarkable progress in NeRF-based driving scene simulation [a,b,c]. Our adversarial framework is general and can be extended to integrate with the advances in NeRF-based simulators to benefit a wide spectrum of practical systems. For instance, our framework can be combined with UniSim [a] to perform adversarial closed-loop evaluations of self-driving cars in NeRF environments, or with ClimateNeRF [b] to identify adverse weather conditions that may corrupt the autonomous driving system. We believe that our work provides valuable insights and opens up new possibilities for creating authentic adversarial evaluations that improve the robustness of self-driving cars. **(3) Additional Experiments** 1. **Real-World Experiments.** To validate the practicality of our adversarial examples, we conduct experiments using scaled models (1:24) of real-world vehicles (see the one-page PDF). We approximate adversarial textures using renderings of the orthogonal views of the examples (future work can leverage [d] to extract the exact texture of NeRF). Next, we print the adversarial texture on A4 paper and tailor it to fit our vehicle model. Our experiments show that the adversarial texture is successful in reducing the confidence of both itself and surrounding objects, proving the practicality of our adversarial example. 2.
**Mesh Comparison.** As suggested by reviewers Xm3G and s7Sz, we provide a simple experiment using a randomly picked ShapeNet car model as a mesh baseline. We used PyTorch3D’s differentiable renderer and optimized the vertex colors as an adversarial example to attack 3D detectors (BEVDet). To align with the setting of the NeRF counterpart, we randomly rendered the mesh model and pasted the patch onto the original images. The attack performance in mAP and NDS is slightly lower than the NeRF counterpart. This may be attributed to the latent space of the NeRF network being a higher-dimensional representation than vertex colors, providing many more solutions for the attack, which results in a better attack effect.

| Method | NDS | mAP |
| ----------- | ----------- | ----------- |
| Clean | 0.3822 | 0.3076 |
| Mesh attack | 0.3018 | 0.2183 |
| NeRF attack | 0.2648 | 0.1895 |

**References** [a] UniSim: A Neural Closed-Loop Sensor Simulator, CVPR 2023 [b] ClimateNeRF: Physically-based Neural Rendering for Extreme Climate Synthesis, ICCV 2023 [c] Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field, CVPR 2023 [d] Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement, ICCV 2023 Pdf: /pdf/e4bffbf473e06b0ddf6373ea8aa2e9d4fb2edeae.pdf
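The Expectation Over Transformation objective described in contribution 2 of the rebuttal minimizes the detector's *expected* confidence over a distribution of transformations (poses, scenes), so the attack survives changes of viewpoint. A toy Monte-Carlo sketch with a stand-in "detector" (the sigmoid confidence, the additive shift transformation, and all names are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(texture, shift):
    """Toy stand-in for a detector's confidence on one rendered view.
    A random additive 'shift' plays the role of a pose/scene transformation."""
    return 1.0 / (1.0 + np.exp(-(texture + shift).sum()))

def eot_loss(texture, n_samples=256):
    """Monte-Carlo estimate of E_t[confidence(t(texture))], which the
    attack minimizes so that it transfers across transformations t."""
    shifts = rng.normal(scale=0.5, size=(n_samples, texture.size))
    return float(np.mean([confidence(texture, s) for s in shifts]))

texture = np.zeros(4)
loss0 = eot_loss(texture)          # expected confidence of the clean texture
loss1 = eot_loss(texture - 1.0)    # a texture pushed toward lower confidence
```

In the real attack the same expectation would be minimized by gradient descent on the NeRF latent code through the differentiable renderer rather than by hand-picking a perturbation.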
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work proposes to generate 3D adversarial examples for attacking 3D object detectors in driving scenarios using NeRF. In particular, it integrates a series of techniques, including primitive-aware sampling and semantic-guided regularization, to ensure the physical realism and realizability of the generated adversarial examples. Extensive experiments have validated the effectiveness of the proposed method in reducing detection performance and serving as data augmentation. Strengths: 1. As an early attempt at generating 3D adversarial examples using NeRF, this work could offer a new perspective for the community in understanding and tackling real-world 3D adversarial attacks. 2. The extensive experiments validate the superiority of NeRF as a 3D adversarial attack generator. In particular, it is interesting to see that the generated adversarial examples can serve as data augmentation to improve clean performance, which aligns with previous observations in classification. Weaknesses: 1. My major concern is the assumed attacking setting of this work, i.e., how to leverage the proposed method in real-world driving scenarios. If only a static adversarial example is attached to the scene, generating other static objects on the road may be more practical than generating a vehicle; otherwise, the authors are expected to show a video under an egocentric view to demonstrate the attack effectiveness, i.e., whether dynamically moving adversarial vehicles can consistently mislead the 3D detectors from different view directions. 2. The claim "the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs)" in the abstract may not be accurate. ViewFool [1] also models adversarial examples using NeRF, although only the view direction is adversarially optimized. It would be more accurate if the authors highlighted this work as the first 3D adversarial example generator using NeRF. 3.
Missing references regarding early attempts at marrying NeRF and adversarial attacks (which are mostly orthogonal to this work): [1] "ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints", Y. Dong et al., NeurIPS'22. [2] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations", Y. Fu et al., ICML'23. [3] "Aug-NeRF: Training Stronger Neural Radiance Fields With Triple-Level Physically-Grounded Augmentations", T. Chen et al., CVPR'22. 4. Minor issue: There exists some inconsistency in terms of tense and punctuation, which could be improved in the final version. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. In my understanding, the primitive-aware sampling essentially performs a coordinate transformation between the world coordinate of the driving scenario and the canonical space defined by Lift3D. Will this transformation stretch the shape of the generated adversarial vehicles? 2. How could you generate adversarial vehicles in different styles if the mapping between the optimized texture code and the vehicle texture is deterministic? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Although the developed adversarial attacks may cause security concerns, this work intended to gain a deeper understanding of 3D adversarial examples and improve the achievable robustness on them, thus not suffering from negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
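Question 1 above concerns a world-to-canonical coordinate transformation: a rigid move into the box frame followed by a per-axis rescale, and the rescale is exactly where stretching can occur (two boxes with different aspect ratios map to the same canonical cube). A hedged sketch of such a generic box normalization (not the authors' exact implementation):

```python
import numpy as np

def world_to_canonical(points, center, yaw, size):
    """Map world-frame points into a 3D box's canonical [-1, 1]^3 frame.

    Dividing by the per-axis half-size is what can stretch or squash
    the object when the box aspect ratio differs from the canonical one.
    """
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0],       # rotate about the vertical axis
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = (points - center) @ rot.T   # rigid part: translate + rotate
    return local / (np.asarray(size) / 2.0)  # non-rigid part: per-axis rescale

center = np.array([10.0, 5.0, 0.0])
size = (4.0, 2.0, 1.5)                       # length, width, height
corner = center + np.array([2.0, 1.0, 0.75]) # a box corner, yaw = 0
canon = world_to_canonical(corner[None], center, yaw=0.0, size=size)
```

The rebuttal below confirms the stretching: the authors deliberately randomize the box dimensions during training to exploit it for transferability.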
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive and detailed feedback. Below, we reply to individual comments and questions raised by the reviewer: **(1) Real-world experiments.** It is practical to produce an adversarial NeRF in the real world by printing an adversarial texture. Our added experiments in the real world (see the one-page PDF) show that by rendering the orthogonal view of the texture and simply pasting it onto vehicle models, we achieve reasonable results in attacking 3D detectors. A more advanced method can leverage recent work [a] to extract the underlying texture of the NeRF. **Sequential results.** Providing sequential results such as video is valuable feedback. To evaluate the effects of the movement of objects, we present experiments in Fig. 4 of the main paper, which demonstrate the effectiveness of our attack across different locations and rotations. **(2, 3) Claim issue and references.** Thank you for pointing this out. The NeurIPS 2022 paper "ViewFool" leverages the differentiability of NeRF to find adversarial view directions. This work is related to ours, and we will add a discussion about it and revise our claim accordingly. "Aug-NeRF" investigates the robustness of NeRF reconstruction itself but does not model adversarial examples as NeRF. "NeRFool" does something similar to "Aug-NeRF" and became available online **after** the NeurIPS 2023 deadline. Compared with these works, our method uses NeRF as an adversarial example generator. We thoroughly evaluate the robustness of 3D detectors by leveraging NeRF's photorealistic synthesis and differentiability, and provide insightful analysis to develop more robust 3D detectors. We will cite them and add a comparison in our related work. **(4) Improvement of writing.** Thanks for pointing out the tense and punctuation issues. We will revise our paper further. **(5) Q1: Whether stretching.** Yes.
In our implementation, we randomly stretch the length, width, and height of the adversarial vehicles to enhance transferability. Detailed hyperparameters can be found in Section A of our supplementary material. **(6) Q2: Transfer to unseen vehicles.** The optimized texture code and the vehicle texture are not tightly coupled. We can easily transfer the adversarial texture to unseen vehicle shapes (please see the visualization in Section F of the supplementary material). **References** [a] Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement, ICCV 2023 --- Rebuttal 2: Title: Reviewer Response Comment: Thank the authors for the great efforts made in the rebuttal; the real-world experiments look interesting. However, the attack effectiveness under a real-world setting may be limited (which can be caused by many non-technical reasons in the uncertain real world), and it is currently hard to say whether the perturbation is an adversarial attack or just strong noise. As such, I will keep my original score for now, and I am willing to discuss it with other reviewers to further adjust my score. --- Rebuttal Comment 2.1: Title: Thank you for the feedback! Comment: Dear reviewer R8Y1, Thank you for providing valuable comments. We agree that the attack effectiveness in a real-world setting may not be fully exploited due to limited time and resources during the rebuttal phase. The inherent reasons that reduce our effectiveness in the real world can be the domain gap between the trained environment and our real-world environment, different camera parameters, lighting issues, and so on. We want to emphasize that our major goal and contribution is leveraging adversarial NeRF to better understand the robustness of 3D detectors in driving scenarios rather than crafting a real-world attacker.
These understandings also contribute to our proposed training techniques (e.g., defense by data augmentation) to improve the clean performance and robustness of detectors, which is not necessary for a real-world attacker, as agreed by reviewer s7Sz. We believe that the real-world experiment is a bonus and is orthogonal to our core contribution. **Adversarial NeRF *vs* Strong Noise.** To further address the reviewer's concern, we conduct additional experiments comparing adversarial NeRF with strong noise in real-world settings. In these experiments, we replace the adversarial texture area with various types of noise, including pure black, the mean color of the background image, and random noise. In the table below, we observe that adversarial NeRF achieves the lowest predicted confidence and outperforms the other three types of texture in terms of attack effectiveness for hiding surrounding objects.

| Texture | Clean | Pure Black | Mean Color | Random Noise | Adversarial NeRF |
| :-: | :-: | :-: | :-: | :-: | :-: |
| Confidence | 0.672 | 0.632 | 0.634 | 0.644 | **0.625** |

We thank the reviewer for providing the feedback! If the reviewer has any further questions or suggestions, we are more than happy to take them.
Improving *day-ahead* Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context
Accept (poster)
Summary: The work presents a multi-modal model, called CrossViViT, to perform day-ahead solar global horizontal irradiance predictions. In that, the model combines spatial information from satellites, i.e. RGB, IR and vapor channels, across Europe with time series information from six point-like stations, i.e. clear sky, pressure, direct normal irradiance, diffuse horizontal irradiance as well as a derived proxy global horizontal irradiance based on the Ineichen model. CrossViViT's performance has been compared against various other statistical and numerical models (Persistence and FFT) as well as other state-of-the-art deep learning approaches based on transformer building blocks. The performance is measured in RMSE and MAE. Strengths: - Application case from natural sciences incl. challenging real-world problems of multi-modality and missing data - Distinction between 'easy' and 'hard' prediction cases; this is commonly overlooked - Open discussion of strengths and limitations of CrossViViT in contrast to other methods, in particular: * Showcasing that it does not win across the board * Improving the interesting 'hard' cases for domain applications Weaknesses: - The reviewer thinks that the evaluation in the manuscript could generally be improved with findings from other papers as follows: * Normalize the forecasting values into a stated range (e.g. [0-1]) or state the value ranges; otherwise an RMSE/MAE improvement by a certain value cannot be put into a frame of reference * Alternatively, consider reporting MAPE (mean absolute percentage error) improvements instead, which already includes that maximum range * By extension the plots in Fig. 4 can be considered somewhat misleading as they do not show the minimum value 0 or give a magnitude of improvement - The authors dismiss the use of Fourier layers, e.g.
AFNO, without clear substantiation - Since cross-attention is such an integral part for mixing tokens from the spatial and temporal domains, it would be meaningful to show the equation in your manuscript beyond the reference - It would be meaningful to extend the discussion of the findings towards the domain and/or real-world. What does it mean that you can achieve - Non-adherence to the conference paper template, fonts in tables are too small, figures are outside of the text margins - Color palette is difficult to read for color-blind people (red-green) - The study is not reproducible as there is no code given; there is no indication that it will be released in case of acceptance - Might be good to show the rough location, possibly with an arrow, for the TAM station on the map in Fig. 2 - To the reviewer's personal taste: change the title from a question to a typical title like "Solar Irradiance Time Series Forecasting with Spatio-Temporal Context" Technical Quality: 3 good Clarity: 2 fair Questions for Authors: No direct question but an invitation to provide additional details or comments to the points raised in 'Weaknesses' Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Normalize the forecasting values into a stated range (e.g. [0-1]) or state the value ranges; otherwise an RMSE/MAE improvement by a certain value cannot be put into a frame of reference

We thank the reviewer for the suggestion. We think, however, that given that all the models are compared on the same setups and the same unobserved years for each station, the improvement of one model should be interpretable with respect to the others. However, if the reviewer means that normalizing would be helpful in order to compare the results across stations as well, we agree that it is a valid point. Yet, since the different stations have different patterns, we believe it is better to compare models for each station and not between stations, so that we keep in mind the order of magnitude of the values, which can differ greatly from one station to another.

> Alternatively, consider reporting MAPE (mean absolute percentage error) improvements instead, which already includes that maximum range

Indeed, MAPE could be a good choice, but unfortunately it is problematic in our case, given that we are also considering nights through our sliding window training process, which involves ground truth values of 0, which cannot be handled by the MAPE by definition.

> By extension the plots in Fig. 4 can be considered somewhat misleading as they do not show the minimum value 0 or give a magnitude of improvement

The goal of the radar plots in Fig. 4 is to compare the three models appearing on them with regard to each of the metrics we present. This, in our opinion, is easily doable using the plots. What would be the advantage of showing the minimum value 0?

> The authors dismiss the use of Fourier layers, e.g. AFNO, without clear substantiation

Does the reviewer mean mentioning Fourier layers in related works, or using them within the architecture of CrossViViT?
If it is the former, we did mention FNO, AFNO and GeoFNO in related works, as they are indeed very important works regarding weather prediction, PDE solving and video prediction in general. However, we wanted to build an architecture that only uses “basic” building blocks (including transformers, given the revolution they brought to our field) rather than already advanced architectures. There is of course a possibility to integrate many different existing methods to possibly improve the results, but we believe it is better to start without them at first. Furthermore, since we are not directly doing video prediction (although it would have been possible to predict the context as an auxiliary target, we chose not to at first), we do not think Fourier layers were necessarily first in line among all recent methods to improve our forecasting performance.

> Since cross-attention is such an integral part for mixing tokens from the spatial and temporal domains, it would be meaningful to show the equation in your manuscript beyond the reference

We thank the reviewer for this comment; we added the equation to the manuscript.

> It would be meaningful to extend the discussion of the findings towards the domain and/or real-world. What does it mean that you can achieve

We apply CrossViViT to a real-world problem of forecasting solar irradiance, which can be useful for mitigating climate change by encouraging the use of solar energy. We also mentioned in the introduction that CrossViViT can in principle be used to forecast other physical variables.

> Non-adherence to the conference paper template, fonts in tables are too small, figures are outside of the text margins

We modified the paper to fit everything within the text margins, except for Figure 3 showing prediction visualizations, which would unfortunately be unreadable if we kept it within the margins; if this is a problem, we can put some of the visualizations in the supplementary material.
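For reference, the cross-attention that mixes time-series and spatial-context tokens presumably takes the standard scaled dot-product form, with queries from the time-series branch and keys/values from the video context (a sketch of the textbook equation, not necessarily the manuscript's exact notation):

```latex
\mathrm{CrossAttention}(X_{\mathrm{ts}}, X_{\mathrm{ctx}})
  = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
\qquad
Q = X_{\mathrm{ts}} W_Q,\quad
K = X_{\mathrm{ctx}} W_K,\quad
V = X_{\mathrm{ctx}} W_V .
```

Here $X_{\mathrm{ts}}$ denotes the station time-series tokens and $X_{\mathrm{ctx}}$ the satellite-context tokens; each time-series token attends over all spatial tokens.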
> Color palette is difficult to read for color-blind people (red-green)

When making the plots, we tried to make sure that they were color-blind friendly, as one of our co-authors is color-blind as well (and they made the plots). Can you give suggestions of colors we can use instead?

> The study is not reproducible as there is no code given; there is no indication that it will be released in case of acceptance

You can find the repository here: https://anonymous.4open.science/r/CrossViVit-57C2

> Might be good to show the rough location, possibly with an arrow, for the TAM station on the map in Fig. 2

Figure 2 was changed accordingly.

> To the reviewer's personal taste: change the title from a question to a typical title like "Solar Irradiance Time Series Forecasting with Spatio-Temporal Context"

The title was changed accordingly.

--- Rebuttal Comment 1.1: Comment:

> We thank the reviewer for the suggestion. We think, however, that given that all the models are compared on the same setups and the same unobserved years for each station, the improvement of one model should be interpretable with respect to the others. However, if the reviewer means that normalizing would be helpful in order to compare the results across stations as well, we agree that it is a valid point. Yet, since the different stations have different patterns, we believe it is better to compare models for each station and not compare between stations, so that we keep in mind the order of magnitude of the values which can differ greatly from one station to another.

> Indeed, MAPE could be a good choice, but unfortunately it is problematic in our cases given that we are also considering nights through our sliding window training process, which involves ground truth values of 0, which cannot be handled by the MAPE by definition.

It seems that the reviewer and the authors have different views on reporting numbers in regression tasks.
For the reviewer, it is more meaningful to report a relative performance/improvement of a predictor rather than the absolute scale. This enables better judgment for individual stations, but also across the stations. This ties into the desire for a normalization/standardization of the scales as well as the request to report the MAPE. While the latter is challenging for small values, one can use typical numerical tricks like dividing by a very small epsilon.

> The goal of the radar plots on Fig 4 is to compare the three models appearing on them, regarding each of the metrics we present. This, in our opinion, is easily doable using the plots. What would be the advantage of showing the minimum value 0?

Not including a zero value for a bounded range allows one to visually over-enhance improvements.

> However, we feel that we wanted to build an architecture that only uses “basic” building blocks [...]. Furthermore, since we are not doing directly video prediction (although it would have been possible to predict the context as an auxiliary target, but we chose not to at first), we do not think fourier layers were necessarily first in line among all recent methods to improve our forecasting performance.

The reviewer acknowledged the reasoning of the authors for making a more focused study on basic building blocks. Nevertheless, Fourier layers have shown better predictive performance compared to standard blocks for networks such as FourcastNet. The authors may consider looking at them in the future.

> We apply CrossViViT to a real-world problem of forecasting solar-irradiance which can be useful for mitigating climate change by encouraging the use of solar energy. We also mentioned in the introduction that CrossViViT can in principle be used to forecast any other physical variables.

The reviewer would like to point out that the authors have only demonstrated a model with better predictive performance. Yet, what would it take to get this model in production?
Can the possible effects on the climate be quantified? Is there a cost-use break-even point? It would be meaningful to at least roughly reason about these or related aspects.

> When making the plots, we tried to make sure that they were color-blind friendly, as one of our co-authors is color-blind as well (and they made the plots). Can you give suggestions of colors we can use instead?

There are several dedicated color palettes, e.g. to be found here: https://www.nceas.ucsb.edu/sites/default/files/2022-06/Colorblind%20Safe%20Color%20Schemes.pdf. To the personal taste of the reviewer, Tol Muted works well, but others will probably also do.

All other points: thank you for incorporating them in the manuscript.

---

Reply to Comment 1.1.1:
Comment:

> Indeed, MAPE could be a good choice, but unfortunately it is problematic in our case given that we are also considering nights through our sliding window training process, which involves ground-truth values of 0, which cannot be handled by the MAPE by definition.

> It seems that the reviewer and the authors have a different view on reporting numbers in regression tasks. For the reviewer, it is more meaningful to report the relative performance/improvement of a predictor rather than the absolute scale. This enables better judgment for individual stations, but also across the stations. This ties into the desire for a normalization/standardization of the scales as well as the request to report the MAPE. While the latter is challenging for small values, one can use typical numerical tricks like dividing by a very small epsilon.

We understand that a relative metric could better showcase the improvement of a predictor. We therefore computed the MAPE for the predictions, using a small epsilon to replace 0 values for night time steps. Here, as a sample of the results, are the results for the day-ahead prediction of our CrossViViT models and other models, on the CAB test station and test years.
As you can observe, the CrossViViT models are still leading in performance. In the final version we will add the MAPE for all tests and cases.

| Model | MAPE (207) | MAPE Easy (120) | MAPE Hard (87) |
|--------------------------|:----------:|:---------------:|:--------------:|
| Persistence | 0.54 | 0.34 | 0.82 |
| Fourier_3 | 8.32 | 8.18 | 8.52 |
| Fourier_4 | 5.18 | 5.44 | 4.82 |
| Fourier_5 | 4 | 3.97 | 4.05 |
| Clear Sky | 0.72 | 0.48 | 1.05 |
| Reformer | 5.1 | 4.83 | 5.47 |
| Informer | 6.7 | 5.71 | 8.06 |
| FiLM | 7.43 | 7.8 | 6.91 |
| PatchTST | 2.44 | 2.35 | 2.57 |
| LightTS | 6.8 | 6.61 | 7.06 |
| CrossFormer | 3.24 | 2.93 | 3.68 |
| FEDFormer | 5.67 | 5.15 | 6.38 |
| Dlinear | 14.08 | 12.05 | 16.87 |
| AutoFormer | 13.04 | 12.63 | 13.6 |
| CrossViViT | **0.45** | **0.31** | **0.64** |
| CrossViViT (No RoPE) | 0.84 | 0.62 | 1.16 |
| CrossViViT MultiQuantile | 0.62 | 0.4 | 0.93 |

> Not including a zero value for a bounded range allows one to visually overstate improvements.

Following the previous comment, we will also include MAPE in the radar plots in the final revision.

> The reviewer acknowledged the reasoning of the authors for making a more focused study on basic building blocks. Nevertheless, Fourier layers have shown better predictive performance compared to standard blocks in networks such as FourCastNet. The authors may consider looking at them in the future.

We are definitely not underestimating the predictive power of Fourier layers, but simply thought they were not the most suitable for our first attempt. Still, FourCastNet is indeed impressive and we will make sure to consider such layers in future efforts.

> The reviewer would like to point out that the authors have only demonstrated a model with better predictive performance. Yet, what would it take to get this model in production? Can the possible effects on the climate be quantified? Is there a cost-use break-even point? It would be meaningful to at least roughly reason about these or related aspects.
We acknowledge the reviewer's concerns and agree with the points made. Evaluating the usefulness of such models using metrics like MAE or RMSE certainly does not capture the nature of the forecasts, which is why we split the data into "Easy" and "Hard" cases; we agree that while this is a step in the right direction, it is not enough. The usefulness of such models can only be evaluated by looking at their downstream performance, which in our case would correspond to the amount of energy produced vs. its carbon cost. This is future work that we are envisioning, working on real stations and using our forecasts to guide the management of the energy grid. As for the cost of the model, we will estimate its carbon footprint and add it to the camera-ready version. It is important to keep in mind, however, that such a model should in principle only be run once a day to produce the forecast; given the MACs that we reported, the potential savings in the generated energy's carbon footprint would outweigh that of CrossViViT.

> There are several dedicated color palettes, e.g. to be found here: https://www.nceas.ucsb.edu/sites/default/files/2022-06/Colorblind%20Safe%20Color%20Schemes.pdf. To the personal taste of the reviewer, Tol Muted works well, but others will probably also do.

Following your recommendation, we will reproduce the results in the Tol Muted palette.

> All other points: thank you for incorporating them in the manuscript.

Thanks!
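Since the epsilon-adjusted MAPE comes up repeatedly in this thread, here is a minimal sketch of how such a metric might be computed. The function name and the default epsilon are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def mape_with_epsilon(y_true, y_pred, eps=1e-8):
    """MAPE where near-zero ground-truth values (e.g. night-time
    irradiance) have their denominator replaced by a small epsilon,
    following the numerical trick discussed above. Returned as a
    fraction; multiply by 100 for a percentage. The eps value here
    is an assumption."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = np.where(np.abs(y_true) < eps, eps, np.abs(y_true))
    return float(np.mean(np.abs(y_true - y_pred) / denom))
```

Note that with a tiny epsilon, any non-zero error on a zero-valued target still dominates the average, which is why the choice of epsilon (and whether night steps are included at all) materially affects the reported numbers.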
Summary: The work presents a method to integrate information about clouds (using satellite images) with time-series data related to solar irradiance, to improve solar irradiance forecasting.

Strengths: Here are some interesting aspects of the paper:
- the release of a new dataset containing both time series and satellite images spanning many years for several sites.
- an attempt to build a multimodal architecture based on transformers.
- usually, a subdivision between sunny days and cloudy days is made in forecasting works related to solar irradiance (e.g. PV production). The authors have instead proposed to subdivide the days into "hard" and "easy" based on the similarity between two consecutive days. I think this approach is interesting and helps the fair assessment of the model.

Weaknesses: The main weakness I see is that the day-ahead use case is not explicitly evaluated. I know that the sliding window is more general, but an important real-world use case is to have a true day-ahead prediction. The authors could test their algorithm on the day-ahead use case (properly extracting the sliding windows of interest, i.e. from 0:00 to 23:00). Moreover, some details on the actual input of the model seem missing.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
Fig. 2 --> TAM is outside the area of interest. This is described in the text but, since the table in the figure reports TAM while it is not present in the first satellite image, the authors should insert a comment in the caption.
Row 230: It is not true for TAM, as written in row 266.
Fig. 3 and Fig. 4 contain some discussions of the results, but it would be better if these comments were moved to a proper section in the text.
It could be interesting to consider a day-ahead situation, in other words considering the sliding window starting at 0:00 and ending at 23:00 of day-1 and computing the next day. In this way the most common day-ahead use case is tested.
I think the satellite images correspond to the same times as the time series.
I didn't find this information in the text (or I missed it).

Some details are not clear to me:
1. Is the context image used as a separate channel?
2. Is the data from all stations used for training?
3. What are the actual inputs of the model? A list and a detailed encoding would be preferable. I had a look at the appendix; some details are present, others are not.
4. What is the computational time required for training the proposed approach?

An interesting future perspective could be to incorporate forecasts of the atmosphere's state (images obtained from a weather prediction service provider).

Row 51: Sound prediction?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations have been identified and discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:

> The main weakness I see is that the day-ahead use case is not explicitly evaluated. I know that the sliding window is more general, but an important real-world use case is to have a true day-ahead prediction. The authors could test their algorithm on the day-ahead use case (properly extracting the sliding windows of interest, i.e. from 0:00 to 23:00).

Thank you for the suggestion; we ran the test for the day-ahead use case, in practice selecting only the 00:00 to 23:00 windows when computing the metrics. The resulting table can be found in the rebuttal PDF. As the reviewer can see, it only slightly decreases the absolute performance of all models, but comparatively, CrossViViT still outperforms the baselines. While we agree that this demonstrates the performance in an operational setting, we believe it would fit better in the appendix rather than the main text.

> Moreover, some details on the real input of the model seem missing.

We are not sure we understand what the reviewer means. What type of details are missing?

> Fig. 2 --> TAM is outside the area of interest. This is described in the text but, since the table in the figure reports TAM while it is not present in the first satellite image, the authors should insert a comment in the caption.

We thank the reviewer for this suggestion and agree with the premise. To clarify the position of TAM (slightly) outside of the area of interest, we increased the area covered by the top-left panel of Figure 2 to include TAM, while highlighting the area we are considering in red. We also inserted a comment in the caption.

> Row 230: It is not true for TAM, as written in row 266.

This was corrected.

> Fig. 3 and Fig. 4 contain some discussions of the results, but it would be better if these comments were moved to a proper section in the text.
Most comments are in the discussion part of the paper, yet we thought it would make for a better reading experience to also put some significant comments directly in the caption, with the figure of interest close by, rather than having the reader look for the figure corresponding to the comment. It is, in our opinion, not uncommon to discuss results in captions.

> I think the satellite images correspond to the same times as the time series. I didn't find this information in the text (or I missed it).

Yes, the time series and the satellite images are aligned temporally. We clarified that in the text.

> Is the context image used as a separate channel?

What we call the spatial context is the ensemble of satellite data, which includes multiple video channels, as described in the satellite data section. There are 11 channels in total.

> Is the data from all stations used for training?

As mentioned in the paper: 3 stations (IZA, CNR, PAL) are used for training, 1 for validation (PAY), and the two remaining for testing (TAM and CAB). This allows us to evaluate the spatial generalization capabilities of the model.

> What are the actual inputs of the model? A list and a detailed encoding would be preferable.

We presented all inputs in the method section and the type of data in the satellite data and time series sections. Other details are indeed in the appendix. If you think more details are missing, we will be happy to add them in the appendix.

> What is the computational time required for training the proposed approach?

The table below highlights all the inference and training metrics for all models (note that all the times presented are for **one GPU**):

|Model|Mean Latency (ms)|STDev. Latency (ms)|Giga MACs|Training time per epoch (s)|
|---|---|---|---|---|
|Reformer|5.43|0.32|1.66|387|
|Informer|8.63|0.30|2.54|623|
|FiLM|9.5|0.25||445|
|PatchTST|2.69|0.13|0.57|106|
|LightTS|0.82|0.087|0.0004|370|
|CrossFormer|17.89|0.43|7.05|1300|
|FEDFormer|66.41|0.63|1.03|1134|
|DLinear|0.29|0.007|0.00004|365|
|AutoFormer|32.03|0.54|2.92|1500|
|CrossViViT|65.03|0.43|180.47|18000|
|CrossViViT MultiQuantile|50.48|0.24|100.45|18000|

*Latency: the time it takes for the model to process one instance (batch size = 1).*
*MAC: number of multiply-accumulate operations; a multiply-accumulate corresponds to the operation a + (b × c), which counts as one operation. We don't report MACs for FiLM since it is based on the S4 model, which the library we used does not support.*

> An interesting future perspective could be to incorporate forecasts of the atmosphere's state (images obtained from a weather prediction service provider).

It is indeed in our plans for future work! Thanks for the suggestion.

> Row 51: Sound prediction?

By "sound prediction" we meant correct prediction; we replaced it in the text to avoid any confusion.

---

Rebuttal Comment 1.1:
Comment:

> As the reviewer can see, it only slightly decreases the absolute performance of all models, but comparatively, CrossViViT still outperforms the baselines. While we agree that this demonstrates the performance in an operational setting, we believe it would fit better in the appendix rather than the main text.

Ok.

> We are not sure we understand what the reviewer means. What type of details are missing?

Some of the details that I asked for in the other questions.

> We presented all inputs in the method section and the type of data in the satellite data and time series sections. Other details are indeed in the appendix. If you think more details are missing, we will be happy to add them in the appendix.

I think the appendix contains a good amount of information.
> The table below highlights all the inference and training metrics for all models (note that all the times presented are for one GPU).

Ok.

The authors have answered my questions and resolved my doubts. I increased my rating accordingly.

---

Reply to Comment 1.1.1:
Title: Follow-up answer
Comment: Thank you for your feedback. Following your questions, as well as those from all other reviewers, we added many clarifications and details to the manuscript, hopefully making the final version of the paper as clear as possible.
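The day-ahead evaluation discussed in this thread (keeping only the windows aligned to 00:00–23:00 of a calendar day when computing the metrics) can be sketched as follows. The helper name and timestamps are illustrative, not the authors' code.

```python
from datetime import datetime, timedelta

def is_day_ahead_window(start):
    """Keep only sliding windows whose first timestamp is midnight, so
    that a 24-hour window spans 00:00 to 23:00 of a single day.
    (Hypothetical helper illustrating the filtering described above.)"""
    return start.hour == 0 and start.minute == 0

# Hourly sliding-window starts over two days: only the two
# midnight-aligned windows are kept for the day-ahead metrics.
starts = [datetime(2020, 6, 1) + timedelta(hours=h) for h in range(48)]
day_ahead_starts = [s for s in starts if is_day_ahead_window(s)]
```

Restricting the metric computation to this subset reproduces the operational day-ahead setting without retraining the sliding-window models.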
Summary: This submission presents a multimodal model for next-day solar irradiance prediction. They use time series of past irradiance and satellite images to predict irradiance 24 h in advance. The model consists of one transformer branch for each modality and a shared (cross-modal) transformer. Their method can be used to predict uncertainty as well. They display improvements over the SOTA. Finally, they released in open access a dataset acquired with 6 stations across 15 years.

Strengths:
- The problem is interesting, difficult, useful, and not a lot of work has been done with machine learning on the subject
- The model architecture is reasonable
- The authors compare their method to many baselines and competing methods
- The authors provide a large-scale (at least temporally) dataset in open access

Weaknesses:
- Some details are missing, making it hard to understand how the method works precisely.
- The uncertainty prediction with quantiles lacks a proper evaluation, related work, and comparison baselines.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
Q1) The cross-attention module needs to be better explained. There are TxNp video tokens and T temporal tokens; how are they mixed? The authors use "learned positional encoding" for these tokens, but it is not explained how. Is the absolute spatiotemporal position encoded? The sampling rates of EUMETSAT (5 min) and BSRN (1 h) differ, yet they have the same number of observations T?
Q2) What is the influence of RoPE compared to a more standard positional encoding? Since the authors make it an integral part of their method, its effect should be quantified in an ablation study
Q3) "we allow the model to mask a portion of the past time-series" -> Why would the model want to do that? That can only decrease the training performance
S1) The title shouldn't be a question, especially such a hyper-precise and niche one!
It should be: Improved Day-Ahead Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context or something in that vein Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: not provided Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and suggestions to improve the paper. We respond to the reviewer's comments below:

> Some details are missing, making it hard to understand how the method works precisely.

We are not sure we understand what the reviewer means. What type of details are missing?

> The uncertainty prediction with quantiles lacks a proper evaluation, related work, and comparison baselines.

The Multi-Quantile version serves as a supplementary version offering a way to extract the uncertainty attached to the prediction, which can be implemented in most deep learning architectures, but it is not the main contribution of the work, so we had to keep the results and comparison light in this regard. However, we do evaluate its median prediction (even though it is not meant as the definitive prediction) and its ability to include the observed true values. Do you have suggestions for more things we can include?

> Q1) The cross-attention module needs to be better explained. There are TxNp video tokens and T temporal tokens; how are they mixed?

We first mix tokens spatially, that is, we initially have no regard for time, which is implemented by simply reshaping the time dimension into the batch size. For each time step, this leaves $N$ context tokens to be mixed with one time-series token, which is done using cross-attention and results in one token. We believe this was clearly stated in the methods section and illustrated in Figure 1, but if not, let us know how we can make it clearer.

> The authors use "learned positional encoding" for these tokens, but it is not explained how. Is the absolute spatiotemporal position encoded?

The mixing is done first spatially, by considering each time point separately.
We first mix the tokens spatially using cross-attention with Rotary Positional Encoding (RoPE), and the resulting tokens are then concatenated into a temporal sequence which is processed by a transformer that adds a learned positional encoding. This is similar to how the Video ViT works, as described in the appendix.

> The sampling rates of EUMETSAT (5 min) and BSRN (1 h) differ, yet they have the same number of observations T?

The original sampling rates are 5 minutes for EUMETSAT and 1 minute for BSRN; we down-sample both to 30 minutes, so that the context and the time series share a sampling rate of 30 minutes.

> Q2) What is the influence of RoPE compared to a more standard positional encoding? Since the authors make it an integral part of their method, its effect should be quantified in an ablation study

We thank the reviewer for this observation. We did the ablation and will include the results in the paper. The findings, summarized in the tables below (which extend Table 1), suggest that RoPE does help improve the performance.

On CAB (2020-2022):

| Model | MAE | RMSE | MAE Easy | RMSE Easy | MAE Hard | RMSE Hard |
|-------------------------|-----------|-----------|-----------|-----------|-----------|------------|
| CrossViViT | **50.35** | **99.18** | **47.04** | **89.60** | **55.30** | **112.00** |
| CrossViViT without RoPE | 51.11 | 103.66 | **47.31** | 95.13 | 56.84 | 115.31 |

On TAM (2017-2019):

| Model | MAE | RMSE | MAE Easy | RMSE Easy | MAE Hard | RMSE Hard |
|-------------------------|-----------|-----------|-----------|-----------|-----------|------------|
| CrossViViT | **49.46** | **94.96** | **44.01** | **79.91** | 97.40 | **179.30** |
| CrossViViT without RoPE | 109.28 | 196.44 | 111.33 | 197.63 | **91.29** | 185.61 |

> Q3) "we allow the model to mask a portion of the past time-series" -> Why would the model want to do that?
> That can only decrease the training performance.

The idea behind the possibility of masking the past time series was to encourage the model to use the spatial context, rather than relying too much on the past time series, which is the first natural thing for the model to do. In particular, we wanted to prevent the model from simply repeating the past, making persistence-like predictions. Interestingly, we realized that even when entirely masking the past time series, the predictions were quite good, showing that the model was indeed using the spatial context. In practice, however, the performance was still slightly better when keeping the time-series masking at 0, so we kept it this way (while mentioning the possibility in the text and figures).

> S1) The title shouldn't be a question, especially such a hyper-precise and niche one! It should be: Improved Day-Ahead Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context, or something in that vein.

We will change the title accordingly.

> Limitations: not provided

We did mention a few limitations! Yet, that part was rather limited, so we have expanded it since the review.

---

Rebuttal Comment 1.1:
Title: Follow-up questions.
Comment:

> What type of details are missing?

The ones that led to the questions above.

> The Multi-Quantile version [...] is not the main contribution of the work, so we had to keep the results and comparison light in this regard.

Multi-Quantile is either a contribution, or it is not. Since you put it second in the list of contributions, it appears to be one. It would help if you placed this work with respect to the relevant literature and compared its performance with relevant approaches.
If no existing work applies, explain why.

> We do evaluate its median prediction [...] and its ability to include the observed true values.

None of that measures the uncertainty-estimation quality, which is the module's main goal.

> Do you have suggestions for more things we can include?

Here is a non-exhaustive list of relevant work. Explain the relation of your work to them, and look at how they evaluate the quality of the uncertainty estimation.
- Lakshminarayanan et al. Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS, 2017.
- Turkoglu et al. FiLM-ensemble: Probabilistic deep learning via feature-wise linear modulation. NeurIPS, 2022.
- Gal et al. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016.

> CrossViViT without RoPE

Reading the other rebuttal, I see that you replaced RoPE with "a learned PE". Can you give details about this baseline? How do you learn the positional encoding; is it a function? Why not use a standard Fourier-based encoding?

The other explanation does clarify my understanding of token mixing. The authors should make sure to improve the clarity of the text to reflect these non-trivial details. I would be inclined to increase my rating if the RoPE experiment turned out to be valid, and if the authors can correctly situate and evaluate their uncertainty prediction module (or explain convincingly why they cannot).

---

Reply to Comment 1.1.1:
Title: Follow-up answers
Comment:

> What type of details are missing?
> The ones that led to the questions above.

We will add details regarding the token mixing, as described in the rebuttal, as well as precisions regarding RoPE and many other details that were asked about by the other reviewers. We hope that will result in a final version with significantly improved clarity with respect to the original draft.

> The Multi-Quantile version [...] is not the main contribution of the work, so we had to keep the results and comparison light in this regard.
> Multi-Quantile is either a contribution, or it is not. Since you put it second in the list of contributions, it appears to be one. It would help if you placed this work with respect to the relevant literature and compared its performance with relevant approaches.
> If no existing work applies, explain why.

We agree with the reviewer. We therefore added a literature review on uncertainty extraction in general, and on uncertainty for regression and forecasting tasks in particular, to the manuscript (unfortunately, due to space issues, and because we cannot update the rebuttal PDF, we cannot include it entirely here, but you will be able to see it in the next revision).

Only very few approaches actually exist to attach prediction intervals to regression estimates, as will be mentioned in the additional literature review. Ensemble and bootstrap methods would really not be the most efficient here, as our model is heavier than the models typically used with such methods (regression trees, etc.). The approach suggested by MetNet (Sønderby, Casper Kaae, et al. "MetNet: A neural weather model for precipitation forecasting." arXiv preprint arXiv:2003.12140 (2020)), namely separating the prediction into multiple value bins and predicting the probability of each bin for each time step, might work, but would in our opinion be less expressive, as one head would predict multiple bins instead of having multiple specialized heads. It is in a way a generalization of the same concept.

> We do evaluate its median prediction [...] and its ability to include the observed true values.
> None of that measures the uncertainty-estimation quality, the module's main goal.

When estimating prediction intervals, it is common practice to evaluate the quality of the estimation by examining the fraction of test points that fall inside the corresponding prediction intervals, which is precisely what we do. Note that most works, including the ones you mention in your next comment, tackle the evaluation of uncertainty for classification tasks (meaning taking into account the uncertainty of probabilistic outputs when converting them into classes) and only a handful tackle its regression counterpart.
Note also that the evaluation of such prediction intervals attached to regression estimates is surprisingly challenging and a research topic in itself, as pointed out by Sluijterman et al., 2023 ("How to Evaluate Uncertainty Estimates in Machine Learning for Regression?"), in a very recent work. Sluijterman et al. acknowledge that this is one of the most common ways of doing so, yet advocate for simulation-based approaches. We will definitely explore these new evaluation methods in the future, but for the time being, due to time constraints, we shall keep our evaluation scheme as it is, if that is acceptable to the reviewer.

> Do you have suggestions for more things we can include?
> Here is a non-exhaustive list of relevant work [...]

Thanks a lot for these suggestions. We included them in the new related-works paragraph on uncertainty extraction, along with many other references closely related to our problem.

> CrossViViT without RoPE
> Reading the other rebuttal, I see that you replaced RoPE with "a learned PE". Can you give details about this baseline? [...] I would be inclined to increase my rating if the RoPE experiment turned out to be valid, and if the authors can correctly situate and evaluate their uncertainty prediction module (or explain convincingly why they cannot).

Indeed, RoPE is ablated against a standard learnable positional encoding, as was done in the Video ViT and ViT papers. The learnable positional encoding is a learnable parameter $\mathbf{p}\in\mathbb{R}^{N\times d}$ where $N$ is the number of tokens in the sequence and $d$ the embedding dimension. We chose the learnable positional encoding scheme instead of the Fourier-based one because the former is a standard choice and can be more flexible (if correctly learned) than a static embedding.
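The token-mixing pattern described in this thread (folding the time axis into the batch, then letting each time-series token cross-attend over the $N$ context tokens of its own time step) can be sketched in numpy as follows. This is purely illustrative: learned Q/K/V projections, multiple heads, and RoPE are omitted, and all names and dimensions are assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_cross_attention(ctx_tokens, ts_tokens):
    """Fold time into the batch and cross-attend per time step.

    ctx_tokens: (B, T, N, d) satellite-context tokens
    ts_tokens:  (B, T, d)    one time-series token per step
    returns:    (B, T, d)    one mixed token per step
    """
    B, T, N, d = ctx_tokens.shape
    q = ts_tokens.reshape(B * T, 1, d)               # one query per step
    kv = ctx_tokens.reshape(B * T, N, d)             # N keys/values per step
    scores = q @ kv.transpose(0, 2, 1) / np.sqrt(d)  # (B*T, 1, N)
    mixed = softmax(scores) @ kv                     # (B*T, 1, d)
    return mixed.reshape(B, T, d)

# The resulting (B, T, d) sequence would then receive a learnable
# positional encoding p of shape (T, d) before the temporal transformer.
rng = np.random.default_rng(0)
out = spatial_cross_attention(rng.normal(size=(2, 4, 9, 8)),
                              rng.normal(size=(2, 4, 8)))
assert out.shape == (2, 4, 8)
```

Because attention weights sum to one, each output token is a convex combination of its step's context tokens, which matches the "one token per time step" description above.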
Summary: The paper proposes a transformer-based day-ahead forecasting model for solar irradiance at a ground station. The model ingests previous irradiances and contextual (image-sequence) information with a temporal and a vision transformer. A cross-former merges the tokens, and a temporal transformer decoder estimates irradiances in a 24-hour window. An optional multi-quantile output head also allows the model to estimate uncertainty by forecasting quantiles. This multi-quantile loss produces predictions with slightly lower accuracy, which may be justifiable for the benefit of uncertainty quantification. Further considerations involve rotary positional encodings for the context image information, motivated by the images being centered on the measurement station. Overall, the paper presents a combination of state-of-the-art methods (ViViT, transformers) with problem-specific ideas (RoPE, quantile regression) crafted towards a suitable application (irradiance forecasting).

Strengths:
* important application (irradiance forecasting) addressed with state-of-the-art machine learning (temporal + vision) and an architecture crafted towards the forecasting application.
* uncertainty quantification with a loss function inspired by quantile regression (Koenker & Hallock, 2001)
* separate evaluation in easy and hard cases. Comparison to reasonable comparison methods and baselines
* evaluations between different stations to test out-of-domain generalization

Weaknesses:
* some design decisions are not justified or unclear (see questions: learned positional encoding).
* fast/sloppy preparation of some parts of the manuscript:
  * errors/typos in domain-specific equations: GHI = DNI + DHI x cos(z) <- I believe the x cos(z) should be with the DNI (direct normal irradiance), not the DHI, to account for the sun angle.
  * equation 7: shouldn't it be \hat{y}_\alpha, as there is a prediction \hat{y} for each quantile \alpha?
  * style and references are not consistent, and some are not retrievable (ArXiv.org vs ArXiv, abs/2010.08895; or "Rothfuss, H. (2015); Data access at eumetsat." has no meaning and cannot be retrieved); is "348 [BSRN] BSRN. Baseline surface radiation network." the Driemel et al. 2018 paper "Baseline Surface Radiation Network (BSRN)"?

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
* Why did the authors use a learned positional encoding for the temporal transformer rather than a regular periodic function?
* Why does Multi-Quantile CrossViViT have fewer parameters (almost half) than CrossViViT? Following my intuition, it should have more, as it has more MLP heads. Were the hyperparameters (i.e., number of layers, etc.) different?
* Regarding the DNI/DHI equation: can the authors verify this is just a typo and not implemented in the data generation?
* How applicable is the method compared to related work, in particular with respect to the runtime and inference time? Can the authors provide some numbers on the runtime compared to other approaches?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A limitations section is included in the conclusion. However, these limitations are rather phrased toward future work and, for instance, describe a lack of data that is hard to address. Other potential limitations, like computational runtime, which is an important factor for the applicability of this method, are not discussed. I feel this should be somewhat considered given that the proposed method is crafted for a particular application field.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
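For readers less familiar with the quantile ("pinball") loss behind the equation-7 discussion, here is a minimal NumPy sketch with one prediction \hat{y}_\alpha per quantile level \alpha, as the review suggests. Function names and the exact averaging are our own illustration, not the paper's implementation:

```python
import numpy as np

def pinball_loss(y, y_hat, alpha):
    # Quantile ("pinball") loss: under-predictions are weighted by alpha,
    # over-predictions by (1 - alpha), so minimizing it targets the
    # alpha-quantile of the conditional distribution.
    diff = y - y_hat
    return float(np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff)))

def multi_quantile_loss(y, y_hat_per_alpha, alphas):
    # One prediction per quantile level (hence \hat{y}_alpha),
    # averaged over the levels.
    return float(np.mean([pinball_loss(y, q, a)
                          for q, a in zip(y_hat_per_alpha, alphas)]))
```

At alpha = 0.5 the pinball loss reduces to half the absolute error, which is why the median head can be compared directly against MAE-trained baselines.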
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and suggestions to improve the paper. We respond to the reviewer's comments below: > errors/typos in domain-specific equations: GHI = DNI + DHI x cos(z) <- I believe the x cos(z) should be with the DNI (direct normal irradiance) and not the DHI to account for the sun angle. It was indeed a typo; we double-checked the data-generation process, and it contains the right formula. > equation 7: shouldn't it be \hat{y}_\alpha, as there is a prediction \hat{y} for each quantile \alpha? There is indeed a different prediction for each quantile, so we changed the notation for the Multi-Quantile loss just below Equation 7 (but not in Equation 7 itself, which simply defines a single quantile loss). > style and references are not consistent and some are not retrievable (ArXiv.org vs ArXiv, abs/2010.08895. Or “Rothfuss, H. (2015); Data access at eumetsat.” has no meaning and cannot be retrieved); is “348 [BSRN] BSRN. Baseline surface radiation network.” from A. Driemel et al., 2018, Baseline Surface Radiation Network (BSRN)? The three above points were corrected. > Why did the authors use a learned positional encoding for the temporal transformer rather than a regular periodic function? This is a design choice, motivated by the current literature on transformers. In particular, it was used in the architecture that partly inspired ours, the Video ViT (https://arxiv.org/pdf/2103.15691.pdf), as mentioned in the paper. It is therefore natural that we employ the same positional encoding here, as it appears to offer more expressivity and to work best. > Why does Multi-Quantile CrossViViT have fewer parameters (almost half) than CrossViViT? Following my intuition, it should have more, as it has more MLP heads. Were the hyperparameters (i.e., number of layers, etc.) different? 
We tried to train the Multi-Quantile version with the same number of parameters, and the performance was simply a bit lower. The version shown in the paper, with different hyperparameters indeed (a smaller number of layers and a smaller dimensionality, as shown in the appendix), seemed to work better. Please note that for this version, the goal is not to offer the best median prediction, but rather the best confidence interval, i.e., the highest probability of including the observed values, which was achieved by the smaller version. For completeness, however, we added the results of the larger model, with the same hyperparameters as the normal CrossViViT, to the tables. This version has 145.5M parameters (a little more than the normal CrossViViT since, as pointed out, it has more MLP heads). > How applicable is the method compared to related work? In particular with respect to the runtime and inference time. Can the authors provide some numbers on the runtime compared to other approaches? Below, we present a table with different inference and training metrics:

| Model | Mean Latency (ms) | STDev. Latency (ms) | Giga MACs | Training time per epoch (s) |
|--------------------------|-------------------|---------------------|-----------|-----------------------------|
| Reformer | 5.43 | 0.32 | 1.66 | 387 |
| Informer | 8.63 | 0.30 | 2.54 | 623 |
| FiLM | 9.5 | 0.25 | | 445 |
| PatchTST | 2.69 | 0.13 | 0.57 | 106 |
| LightTS | 0.82 | 0.087 | 0.0004 | 370 |
| CrossFormer | 17.89 | 0.43 | 7.05 | 1300 |
| FEDFormer | 66.41 | 0.63 | 1.03 | 1134 |
| DLinear | 0.29 | 0.007 | 0.00004 | 365 |
| AutoFormer | 32.03 | 0.54 | 2.92 | 1500 |
| CrossViViT | 65.03 | 0.43 | 180.47 | 18000 |
| CrossViViT MultiQuantile | 50.48 | 0.24 | 100.45 | 18000 |

*Latency: the time it takes for the model to process one instance (batch size = 1).* *MAC: number of multiply-accumulate operations. 
A multiply-accumulate operation corresponds to the operation a+(b*c) and counts as one operation.* As can be observed in the table, the lightest models are indeed very fast and theoretically much faster than ours, CrossViViT. However, all models have an inference time under one second, so in practice this does not make much of a difference. We would also like to note that, within the context of a solar installation, day-ahead forecasting would be done once a day for each station, so as long as the inference time is not unreasonably high, which is the case here, it seems acceptable to sacrifice inference time for better performance, the purely time-series models being less compute-hungry in comparison. Given the previous comments, we suggest including this table in the appendix rather than the main text, to avoid making the tables too heavy. > Limitations A small note on this was added to the conclusion. --- Rebuttal Comment 1.1: Title: Thank you and follow-up on limitation discussion Comment: Thank you for providing detailed answers to my questions. There are no major disagreements left, and I would be glad to see the paper accepted at the conference, as it addresses an interesting problem field with state-of-the-art models and is technically solid. As a remark on the limitations discussion: as the paper appears sound, a more detailed discussion of existing limitations would be appreciated and highly beneficial to guide future research. A lack of a limitations description in the initial paper was also mentioned by reviewer zf5t, and adding a "small note" to the conclusion (reply to my review) or increasing the "part a little since the review" (response to reviewer zf5t) is not satisfactory. While I appreciate the additional runtime results, they also show that the training time is more than 30x longer, and the latency of transformer models is still 2x to 5x longer than that of classical models. 
This can be openly discussed as a limitation not of this work specifically but rather of transformers in general. Hence, discussing this limitation openly would encourage future work to develop more efficient and still accurate models. This also touches on fairness questions with regard to access to computational resources. I trust that the authors will integrate a thorough discussion of limitations in the final camera-ready version. --- Reply to Comment 1.1.1: Title: Follow-up answers Comment: Thank you very much for your feedback and suggestions. We will make sure to improve the limitations section with a longer discussion, including a part on the training and inference times of transformers and of our model in particular, and relating it to the current challenges regarding access to compute power. It is indeed a very significant subject to tackle.
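As an aside for readers, the corrected irradiance decomposition from the exchange above, GHI = DNI x cos(z) + DHI (the zenith-angle factor on the direct component), can be sanity-checked numerically. The snippet below is our own illustrative sketch, not code from the paper:

```python
import math

def ghi(dni, dhi, zenith_deg):
    # Global horizontal irradiance: the direct normal component (DNI) is
    # projected onto the horizontal plane via cos(zenith); the diffuse
    # component (DHI) already refers to the horizontal plane.
    return dni * math.cos(math.radians(zenith_deg)) + dhi

# Sun directly overhead (z = 0): the full direct beam contributes, GHI = DNI + DHI.
# Sun at the horizon (z = 90): only the diffuse component remains.
```

Putting the cos(z) factor on DHI instead, as in the typo the review flagged, would wrongly zero out the diffuse contribution at high zenith angles while leaving the full direct beam.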
Rebuttal 1: Rebuttal: We thank the reviewers for their efforts and their high-quality reviews. We are happy that they appreciated our contributions to the forecasting literature, including our architecture and the “easy and hard” cases evaluation. We carefully considered your suggestions and believe that they ultimately made our paper better; we therefore thank you again. We answered each reviewer separately, point by point, using the discussion tool. In a nutshell, these are the main points and additions made to the original manuscript:

- Ablated the RoPE positional encoding (replacing it with a learned PE) to measure its impact on the architecture:

On CAB (2020-2022):

| Model | MAE | RMSE | MAE Easy | RMSE Easy | MAE Hard | RMSE Hard |
|-------------------------|-----------|-----------|-----------|-----------|-----------|------------|
| CrossViViT | **50.35** | **99.18** | **47.04** | **89.60** | **55.30** | **112.00** |
| CrossViViT without RoPE | 51.11 | 103.66 | **47.31** | 95.13 | 56.84 | 115.31 |

On TAM (2017-2019):

| Model | MAE | RMSE | MAE Easy | RMSE Easy | MAE Hard | RMSE Hard |
|-------------------------|-----------|-----------|-----------|-----------|-----------|------------|
| CrossViViT | **49.46** | **94.96** | **44.01** | **79.91** | 97.40 | **179.30** |
| CrossViViT without RoPE | 109.28 | 196.44 | 111.33 | 197.63 | **91.29** | 185.61 |

- Added results for a larger Multi-Quantile version of the model, matching the size of CrossViViT:

On CAB (2020-2022):

| Model | Parameters | MAE | $p_t$ | MAE Easy | $p_t$ | MAE Hard | $p_t$ |
|-----------------------------------|------------|-------|------|----------|------|----------|------|
| Multi-Quantile CrossViViT (small) | 78.8M | 61.8 | 0.91 | 57.03 | 0.93 | 68.94 | 0.9 |
| Multi-Quantile CrossViViT (large) | 145.5M | 74.26 | 0.89 | 68.83 | 0.91 | 82.39 | 0.87 |

On TAM (2017-2019):

| Model | Parameters | MAE | $p_t$ | MAE Easy | $p_t$ | MAE Hard | $p_t$ | 
|-----------------------------------|------------|-------|------|----------|------|----------|------|
| Multi-Quantile CrossViViT (small) | 78.8M | 81.2 | 0.71 | 78.93 | 0.70 | 101.18 | 0.75 |
| Multi-Quantile CrossViViT (large) | 145.5M | 79.73 | 0.76 | 76.08 | 0.76 | 111.74 | 0.75 |

- Added a table showing inference times of all models considered, including our own, as well as training times:

|Model|Mean Latency (ms)|STDev. Latency (ms)|Giga MACs|Training time per epoch (s)|
|---|---|---|---|---|
|Reformer|5.43|0.32|1.66|387|
|Informer|8.63|0.30|2.54|623|
|FiLM|9.5|0.25||445|
|PatchTST|2.69|0.13|0.57|106|
|LightTS|0.82|0.087|0.0004|370|
|CrossFormer|17.89|0.43|7.05|1300|
|FEDFormer|66.41|0.63|1.03|1134|
|DLinear|0.29|0.007|0.00004|365|
|AutoFormer|32.03|0.54|2.92|1500|
|CrossViViT|65.03|0.43|180.47|18000|
|CrossViViT MultiQuantile|50.48|0.24|100.45|18000|

- Added CrossViViT results for day-ahead-only cases. Due to the character limit, we refer the reviewers to the attached rebuttal PDF, which contains results for CAB (2020-2022) and TAM (2017-2019).
- Corrected the text, added some details, and made a few things clearer about the method in general.
- Changed the name of the paper to *Improved Day-Ahead Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context*.
- Added an anonymous repository with our code: https://anonymous.4open.science/r/CrossViVit-57C2/README.md

Pdf: /pdf/fb3a620b88537922d9b9eb4e766df77aaaf78873.pdf
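For context on the RoPE ablation discussed above, here is a minimal one-dimensional sketch of rotary positional embeddings. This is our own simplified illustration (the paper applies RoPE to the coordinates of context-image tokens around the station), not the authors' code:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    # Rotary positional embedding: consecutive feature pairs are rotated by
    # position-dependent angles theta_i = pos / base**(2i/d), so position is
    # encoded multiplicatively rather than added to the features.
    d = x.shape[-1]
    assert d % 2 == 0
    theta = pos / base ** (2.0 * np.arange(d // 2) / d)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Because each pair is simply rotated, the embedding preserves vector norms, and the inner product of two embedded vectors depends only on their relative position; that relative-position property is what makes RoPE a natural fit for imagery centered on the measurement station.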
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Accept (poster)
Summary: **Summary** The paper studies finite-sum minimization ($\min_{x} f(x) := \frac{1}{n} \sum_{i=1}^n f_i(x)$) in the distributed setting with a central node and $n-1$ non-central (client) nodes. The setup and the paper's assumptions are as follows. 1. Each function $f_i$ is held on the $i$-th client node, and the first node is designated the central node. The machines on the non-central nodes can communicate function values and gradients with each other as well as with the central node. 2. The different $f_i$'s are assumed to be related to each other via the notion of "second-order similarity", which essentially means that the Hessians of $f_i$ and $f$ evaluated at the same point do not differ from each other by too much in the operator norm. A benefit of this assumption is that the clients do not all need to send their Hessian information. 3. The $f_i$'s are all convex, $f$ is $\mu$-strongly convex, and the $f_i$'s all satisfy $\delta$-average-second-order-similarity as described in 2. The paper's results are three-fold: 1. A non-accelerated algorithm called SVRS, which attains a communication cost of $\widetilde{O}(n + \sqrt{n} \delta/\mu)$. This result improves upon the previous best result of Khaled and Jin when $\delta \geq \sqrt{n} \mu$. 2. An accelerated algorithm called AccSVRS, which attains a communication cost of $O((n+ n^{3/4} \sqrt{\delta/\mu})\log|\epsilon|)$, which improves upon the previous best accelerated rate by Khaled and Jin by a factor of $\log(L/\mu)$. Therefore, this new rate is "smoothness-free". 3. Lower bounds matching the rate in 2. The paper's technique is based, broadly, on the use of Bregman-SVRG: instead of performing exact computations in each iteration, approximate quantities are computed every time and the error is periodically offset by exact computations; additionally, through the use of a Bregman divergence term, the update rule ensures that the next iterate is not too far from the current one in some chosen distance metric. 
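For concreteness, one common way to state the $\delta$-average second-order similarity assumption from point 2 is the following (our notation; the paper's exact statement may differ slightly):

```latex
\frac{1}{n}\sum_{i=1}^{n}
  \bigl\| \nabla f_i(x) - \nabla f_i(y) - \nabla f(x) + \nabla f(y) \bigr\|^{2}
  \le \delta^{2}\, \| x - y \|^{2}
  \qquad \text{for all } x, y,
```

which, for twice-differentiable $f_i$, is implied by the Hessians satisfying $\frac{1}{n}\sum_{i=1}^{n} \|\nabla^2 f_i(x) - \nabla^2 f(x)\|^{2} \le \delta^{2}$ at every $x$, matching the informal "Hessians do not differ by too much" description above.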
Strengths: **Strengths** I think the paper scores highly on the story-telling aspect: it reads quite well! I also think the paper studies an important problem (faster communication complexity for distributed optimization). Weaknesses: In results: 1. Line 64: The paper's result from SVRS beats that of SVRP by Khaled and Jin when $\delta \geq \sqrt{n} \mu$. As of now it's not clear to me what the scope of this assumption is: are there many cases where this inequality holds? It appears to me that Lines 124 and 198 allude to this but it's not entirely clear to me. Can the authors please elaborate? 2. Line 68 - 70: The paper's result with acceleration shaves a log factor from the previous best result as well as removes a component-wise strong convexity assumption from the previous best result. It would be nice to see some intuition for why the removal of log factor here is important and why the component-wise strong convexity is that much stronger than total strong convexity. In the writing: 1. In lines 14 - 17, there are no references for these stated applications. It would be much more convincing to have citations for each of the stated applications. 2. The problem parameter $\mu$ should be introduced separately, rather than as part of the related work in lines 40 - 48, since it's an important parameter that comes up repeatedly. 3. The inclusion of Lemma $3.1$ in the main body does not serve (in my opinion) much purpose because it's mathematically quite dense to parse Equation $(10)$. Theorem $3.2$ is ok since it clearly gives the claimed rate. Instead of Lemma $3.1$, I'd have preferred to have a proof sketch for Theorem $3.2$. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: **Questions** 1. In Algorithm 1, should line 4 be $g_t = \nabla f_{i_t} (w_t) - \nabla f(w_t)$ and should there be an update rule for $w_t$ somewhere? (That's what Line 176 seems to suggest.) 
If not, it would be great to have intuition for why the same $w_0$ is used through the entire algorithm. 2. Please see the Weaknesses section above for more questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your advice. We reply to your comments one by one below. 1. Reply to Weaknesses in Results 1: We are sorry for the confusing argument. Indeed, SVRS is always no worse than SVRP by Khaled and Jin, since $O(n+\delta^2/\mu^2)=O(n)$ and $O(n+\sqrt{n}\delta/\mu) = O(n)$ when $\delta<\sqrt{n}\mu$. We emphasize the case of $\delta \geq \sqrt{n}\mu$ to show the benefit of our method in the ill-conditioned case where $\mu$ is very small. 2. Reply to Weaknesses in Results 2: Thank you for your suggestions. Not only do we remove the log factor, but we also give a directly accelerated method. As mentioned in the paper of KatyushaX (see reference [6, Section 1.2] in our paper), Catalyst SVRP needs to run each SVRP call until a very accurate point is obtained, since the error propagates. Moreover, to optimize the complexity, one needs to terminate each call of SVRP at a different accuracy, which may be difficult for a randomized algorithm. Finally, one needs to tune three parameters in a Catalyzed method. Thus, it is natural to ask whether a directly accelerated method exists. The contribution should not be limited to removing the log factor (though that is also important from a purely theoretical view) but also includes the directly accelerated method itself. As for the improvement from component-wise strong convexity to total strong convexity, much of the literature has discussed the reason. In short, when we remove component-wise strong convexity, each component in the finite-sum objective could even be non-convex! This greatly expands the applicable settings. The most famous example is the shift-and-invert approach to solving PCA [1,2] (or see Sec 1.1, Motivating Examples, in the paper of KatyushaX), where each component is smooth and non-convex, but the average function is convex. Thus, we consider our improvement meaningful. 3. Reply to Weaknesses in writing: Thank you for the advice; we will add references and adjust the structure in a later version. 4. 
Reply to Question 1: This is not a typo. Note that the algorithm shown is the one-epoch (1ep for short) version of our SVRS. Thus, within one epoch of SVRS, the anchor point is fixed, just as in SVRG. Line 176 describes the multi-epoch SVRS, where we update the anchor point $w_t$ based on 1ep SVRS. [1] Youcef Saad. Numerical methods for large eigenvalue problems. Manchester University Press, 1992. [2] Garber, Dan, et al. Faster eigenvector computation via shift-and-invert preconditioning. International Conference on Machine Learning. PMLR, 2016. --- Rebuttal 2: Title: Looking Forward to Your Reply Comment: Dear Reviewer Y65R, We understand that the review process can be time-consuming and demanding. We would greatly appreciate it if you could let us know whether you agree with our reply. --- Rebuttal Comment 2.1: Title: Acknowledgement of response Comment: Dear authors, Thank you for your time and effort in a detailed rebuttal. I am keeping my score. Thanks!
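The fixed-anchor scheme described in the reply (one epoch with the anchor point held fixed, as in SVRG) can be sketched as follows. This is our own generic SVRG-style illustration with made-up names, not the paper's SVRS, which additionally uses Bregman proximal steps and the second-order similarity structure:

```python
import numpy as np

def svrg_epoch(component_grads, x0, step, num_steps, rng):
    # One epoch with a fixed anchor point x0 (SVRG-style): the full gradient
    # is computed once at the anchor, and each inner step uses the unbiased
    # estimator g_i(x) - g_i(x0) + full_grad, whose variance shrinks as the
    # iterate x approaches the anchor x0.
    n = len(component_grads)
    full_grad = sum(g(x0) for g in component_grads) / n
    x = np.array(x0, dtype=float)
    for _ in range(num_steps):
        i = rng.integers(n)
        v = component_grads[i](x) - component_grads[i](x0) + full_grad
        x = x - step * v
    return x
```

On the quadratics $f_i(x) = \frac{1}{2}\|x - a_i\|^2$ the estimator happens to be exact, so a single epoch already contracts toward the minimizer $\bar{a}$; the multi-epoch variant discussed in the reply would then move the anchor after each such epoch.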
Summary: The authors consider distributed minimization problems under data similarity (Hessian similarity). The authors consider stochastic methods that reduce communication complexity via device sampling. In particular, from the stochastic point of view, the variance-reduction techniques SVRG and Katyusha are used. The sliding (stochastic preconditioning/mirror descent with an unusual Bregman divergence) technique is used for dealing with similarity. The authors obtain record results in communication complexity (the previous ones are beaten by a logarithmic factor). To complete the picture, the authors provide lower bounds that establish the optimality of their upper bounds. Synthetic experiments are also given. Strengths: 1) Direct acceleration (without envelopes) is an interesting and important result. In theory it removes the extra logarithmic factor, and in practice it works better. 2) The lower bounds complete the picture. Weaknesses: 1) I think the literature review is not complete. In particular, I found two papers, also about Hessian similarity, which also use the variance-reduction technique and obtain non-accelerated results like the authors'. A detailed comparison of approaches and results is needed. Beznosikov, A., & Gasnikov, A. (2023). Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities. arXiv preprint arXiv:2302.07615. Beznosikov, A., & Gasnikov, A. (2022, September). Compression and data similarity: Combination of two techniques for communication-efficient solving of distributed variational inequalities. In International Conference on Optimization and Applications (pp. 151-162). Cham: Springer Nature Switzerland. 2) The lower bounds are a good supplement to the upper bounds, but they are to be expected. The idea for obtaining them is also known. 
Unfortunately, here the authors also do not give a complete summary of the literature: the problem with a matrix A is classical (the authors note it), but the partition of the problem into columns is also classical; see the papers: Zhang, M., Shu, Y., & He, K. (2020). Tight Lower Complexity Bounds for Strongly Convex Finite-Sum Optimization. arXiv preprint arXiv:2010.08766. Han, Y., Xie, G., & Zhang, Z. (2021). Lower complexity bounds of finite-sum optimization problems: The results and construction. arXiv preprint arXiv:2103.08280. Kovalev, D., Beznosikov, A., Sadiev, A., Persiianov, M., Richtárik, P., & Gasnikov, A. (2022). Optimal algorithms for decentralized stochastic variational inequalities. Advances in Neural Information Processing Systems, 35, 31073-31088. 3) Experiments are not the most important thing to me in this paper, which is primarily theoretical. But I would still like to see real datasets (3, for symmetry with the synthetic data). I would also advise the authors to vary $\delta$ rather than $\mu$ in the synthetic experiments; this way the effect of similarity will be noticeable. Summary: For me, this is a borderline paper. For now I put a (weak) rejection, but I hope the authors will take part in the discussion and make the changes I asked for. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper is theoretical; therefore there is no need to discuss negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for pointing out these interesting and meaningful references. We are sorry for the incompleteness of the references. We will add the references you posted in a later version, because revision is not allowed in the rebuttal period this year. In addition, we find there are still some differences compared to these works. 1. Reply to Weakness 1: Despite the similar assumptions and techniques, with similar communication bounds, we find that our paper still has some differences compared to Beznosikov, A., & Gasnikov, A. (2022, 2023): - We do not assume the smoothness and convexity of the components or the objective, but replace them with a more general proximal approximate solvability assumption (see Eq. (9) in our main paper), which could even cover some nonsmooth and non-convex but proximally tractable component functions. We consider such an assumption more essential, since the local update step in the papers of Beznosikov, A., & Gasnikov, A. (2022, 2023) can be viewed as partially solving the proximal step. - Our algorithm (Alg. 1) is more concise, with easy-to-choose parameters. In particular, the choice of hyper-parameters, such as the learning rate, is totally smoothness-free. This can also be viewed as a benefit of our proximal solvability assumption. - More importantly, beyond the differences in the non-accelerated method, we also give a simple directly accelerated method with a better communication complexity bound. However, Beznosikov, A., & Gasnikov, A. (2022, 2023) provide really meaningful work by considering a more general setting for solving variational inequalities, which covers our simple minimization setup and many common problems. Moreover, they also adopt the famous compression technique to further reduce communication complexity. Due to these challenges, the algorithms introduced by Beznosikov, A., & Gasnikov, A. (2022, 2023) are more complex than ours. 2. 
Reply to Weakness 2: Our way of partitioning the matrix $A$ is inspired by Han, Y., et al. (2021) (as we claimed in Section 4) and is indeed closely related to Kovalev, D., et al. (2022), but different from Zhang, M., et al. (2020), where the authors duplicate the matrix $n$ times instead of partitioning it. Moreover, our settings differ from those of these papers. Zhang, M., et al. (2020) and Han, Y., et al. (2021) both focus on gradient complexity, while our concern is communication complexity. Kovalev, D., et al. (2022) study the communication complexity (as well as the gradient complexity) of smooth variational inequalities, while we aim to give a more refined analysis of communication complexity for minimization problems, even without the smoothness assumption, though we find that constructing a smooth hard instance suffices to yield the desired lower bound. Despite the similarity in construction, our results and theirs are not directly comparable. 3. Reply to Weakness 3: Since the communication bound in all the literature is related to $\delta/\mu$, we only need to change one parameter and fix the other to see the effect. Meanwhile, adjusting the strong-convexity coefficient $\mu$ is rather easy, whereas changing the similarity coefficient would change the dataset entirely. Hence, we adopt this easier tuning approach to conduct experiments for a horizontal comparison of different condition numbers on the same fixed dataset. --- Rebuttal 2: Title: Looking Forward to Your Reply Comment: Dear Reviewer ZtXX, We understand that the review process can be time-consuming and demanding. We would greatly appreciate it if you could let us know whether you agree with our reply. --- Rebuttal Comment 2.1: Comment: Thanks to the authors for the response! I still recommend considering points 1 and 2 and reflecting them in the paper to clarify the paper's place in the literature. 
The paper remains borderline for me; I will raise my score a bit (in hopes that the authors will consider points 1 and 2). The overall impression of the paper remains the same. The results about the algorithms are interesting (but the idea of using variance reduction is not new), the upper bounds are record-breaking (but only by a logarithmic factor), and the lower bounds are a nice addition (but repeat the idea and technique of lower bounds for the non-distributed finite sum). --- Reply to Comment 2.1.1: Title: Further Response to Reviewer ZtXX Comment: Thank you for raising the score. We will definitely incorporate points 1 and 2 into the revision. Here we would again emphasize the differences between these nice works and our work. 1. First, our work has significant differences compared to Beznosikov \& Gasnikov (2022, 2023), despite the similar assumptions and techniques. * We do not assume the smoothness and convexity of the components or the objective, but replace them with a more general proximal approximate solvability assumption, which could even cover some nonsmooth and non-convex but proximally tractable component functions. We consider such an assumption more essential because the local update step in Beznosikov \& Gasnikov (2022, 2023) can be viewed as partially solving the proximal step. * Our algorithm (Alg. 1) is more concise, with easy-to-choose parameters. In particular, the choice of hyper-parameters, such as the learning rate, is totally smoothness-free. This can also be viewed as a benefit of our proximal solvability assumption. * More importantly, beyond the differences in the non-accelerated method, we also give a simple directly accelerated method (Alg. 2) with an optimal communication complexity bound. * Beznosikov \& Gasnikov (2022, 2023) provide really meaningful work by considering a more general setting for solving variational inequalities. They also adopt the famous compression technique to further reduce communication complexity. 
Nevertheless, although their setup is more complicated, our results are also valuable and not directly comparable to theirs. Although minimization problems, on which we focus, are just a subclass of variational inequalities, they are extremely attractive due to their simple structure, and we indeed fill a gap in the literature. Moreover, since minimization problems enjoy much better properties than variational inequalities, the optimal communication complexity bounds for the two kinds of problems are generally different. 2. Second, with regard to the lower bound, which makes our results complete, our way of partitioning the matrix is inspired by Han, Xie, and Zhang (2021) (as we claimed in Section 4) and is indeed closely related to Kovalev, D., et al. (2022), but different from Zhang, Shu, and He (2020), where the authors duplicate the matrix $n$ times instead of partitioning it. Moreover, our settings differ from those of these papers. Zhang, Shu, and He (2020) and Han, Xie, and Zhang (2021) both focus on gradient complexity, while our concern is communication complexity. Kovalev, D., et al. (2022) study the communication complexity (as well as the gradient complexity) of smooth variational inequalities, while we aim to give a more refined analysis of communication complexity for minimization problems, even without the smoothness assumption, though we find that constructing a smooth hard instance suffices to yield the desired lower bound. As we mentioned in the first point, despite the similarity in construction, the optimal communication complexity bounds for the two kinds of problems are not directly comparable.
Summary: The paper presents a novel algorithm for distributed optimization, named Accelerated Stochastic Variance-Reduced Sliding (ASVRS). The authors focus on the problem of minimizing the average of a large number of smooth and strongly convex functions, a common scenario in machine learning and data analysis. The proposed ASVRS algorithm combines the techniques of gradient sliding and variance reduction, aiming to improve the convergence rate and reduce the communication cost in distributed settings. Strengths: The ASVRS algorithm is a novel contribution that combines gradient-sliding and variance-reduction techniques in a unique way. This combination appears to be original and innovative. The authors provide detailed proofs for the convergence rate and communication complexity of the ASVRS algorithm, demonstrating its theoretical advantages over existing methods. I didn't check the details, but the results seem reasonable. Weaknesses: The theoretical analysis relies on several assumptions, such as the strong convexity of the functions. This makes the applicability of the work rather limited. It would be helpful to discuss the implications of these assumptions and how the algorithm's performance might be affected if they are not met. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How sensitive is the ASVRS algorithm to its parameters? Could you provide some guidance on how to set the parameters in practice? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and advice. The assumptions in our paper are that 1) the finite-sum objective is strongly convex, 2) the finite-sum objective satisfies average second-order similarity, and 3) the proximal operator of just one part (or each part) is approximately solvable. These assumptions also appear in previous work (e.g., references [29, 31] in our paper). Assumption 1) is a common assumption in convex optimization, which can be satisfied by adding regularization to a common convex loss function. Assumption 2) is the core assumption in our setup; it appears in a large body of work, is practically grounded in statistical learning, and matches the intuition that the data on each client are similar. Assumption 3) concerns the proximal operator, which also has a large literature, particularly on nonsmooth optimization. Assumption 1) guarantees a benign property of the finite-sum objective as a whole; Assumption 2) captures the connection between the components of the objective; finally, Assumption 3) states a requirement on one component (or each component) individually. The hyper-parameters in ASVRS include the interpolation coefficients $\tau, \alpha$, the learning rate $\theta$, and the full-gradient step probability $p$. ASVRS is an accelerated method; hence, it may display oscillation when the learning rate is large and the interpolation coefficients are improper. Our theorems already give a proper combination of these parameters. However, in practice, the theoretical parameters may not be optimal. Here we describe one common tuning method based on our experience in the experiments. We can choose $p=1/n$, fix the relationship between the interpolation coefficients $\tau$ and $\alpha$ as in Theorem 3.5, and then fine-tune the learning rate $\theta$ and only one interpolation coefficient $\tau \in (0, 1)$. 
In detail, we may scale $\theta$ and $\tau$ around their theoretical values. --- Rebuttal 2: Title: Looking Forward to Your Reply Comment: Dear Reviewer FHTz, We understand that the review process can be time-consuming and demanding. We would greatly appreciate it if you could let us know whether you agree with our reply.
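The tuning recipe in the rebuttal above could be sketched as a small grid search. This is purely illustrative: `run_asvrs`, the scaling grids, and the `alpha = tau` placeholder for the Theorem 3.5 relation are our assumptions, not the authors' code.

```python
import itertools

def tune_asvrs(run_asvrs, theta_theory, tau_theory, n):
    """Illustrative grid search following the rebuttal's tuning recipe.

    `run_asvrs(theta, tau, alpha, p)` is a hypothetical callback that runs
    ASVRS and returns the final objective gap; it is not part of the paper.
    """
    p = 1.0 / n  # full-gradient step probability, as suggested in the rebuttal
    best_gap, best_params = float("inf"), None
    # Scale theta and tau around their theoretical values.
    for s_theta, s_tau in itertools.product([0.25, 0.5, 1.0, 2.0, 4.0],
                                            [0.5, 1.0, 2.0]):
        theta = s_theta * theta_theory
        tau = min(max(s_tau * tau_theory, 1e-3), 1 - 1e-3)  # keep tau in (0, 1)
        alpha = tau  # placeholder for the Theorem 3.5 relation between tau and alpha
        gap = run_asvrs(theta, tau, alpha, p)
        if gap < best_gap:
            best_gap, best_params = gap, (theta, tau)
    return best_gap, best_params
```

The point of the sketch is only the structure of the search: fix $p$ and the $\tau$-$\alpha$ relation, then sweep $\theta$ and $\tau$ multiplicatively around the theoretical values.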
Summary: The paper considers distributed strongly convex optimization problems in the setting where the communication between nodes is the bottleneck. The authors propose new methods, SVRS and AccSVRS, that guarantee new communication complexities. They also prove a lower bound that establishes the optimality of the AccSVRS method. Strengths: I think that the paper is strong. The authors provide new theoretical guarantees in the considered setting. They improve on the previous methods. I haven't checked the proofs in detail and I may have missed some essential parts, but the theory seems sound to me. Weaknesses: It is well known that the (Loopless-)Katyusha method converges after $n + \sqrt{n \frac{L}{\mu}}$ iterations. By applying Katyusha to the authors' problem, we can get the communication complexity $n + \sqrt{n \frac{L}{\mu}},$ since Katyusha requires only one gradient in each iteration. In the regime where $\delta = L$, the Katyusha method has better communication complexity than $n + n^{3/4} \sqrt{\frac{L}{\mu}}.$ Why doesn't it contradict the lower bound (Theorem G.7 and Theorem 4.4)? Does it mean that your method is only better in the regimes when $\delta \ll L$? Minor comments: The paper's setup can be slightly confusing. Many other papers (e.g., \[1,2\]) assume that the nodes can do calculations and send vectors *in parallel,* meaning that they count each round as *one* communication, *one round = one communication*. In comparison, this paper assumes that *one round = $n$ communications* in the full participation regime. Can the authors write a small text *in the paper* explaining the difference between the setups? It seems that we have at least two different setups that both have a right to exist. Typos: Eq. (18): $\nabla$ is missed. \[1\]: https://arxiv.org/abs/2202.09357 \[2\]: https://arxiv.org/abs/2304.04169 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We believe there is some confusion due to a lack of clarity in our paper. **Our setting is generally not directly comparable to the classical smooth and strongly convex setting.** First of all, we restate the framework we study: 1) the finite-sum objective is $\mu$-strongly convex, 2) the finite-sum objective satisfies $\delta$-average second-order similarity, and 3) the proximal operator of just one part (or each part) is approximately solvable (e.g., Eq. (9) in our paper). Under assumptions 1), 2), and 3), we improve the communication complexity compared to the previous work. Thus, *the gradient complexity is not the main contribution.* Indeed, the computation complexity (i.e., gradient complexity) is *not available* in our main theorems, since we do not know the gradient complexity of obtaining an approximate solution of the proximal operator without further assumptions. **Our lower bound is also in the framework of 1), 2), and 3)**, but we strengthen 3) from approximately solvable to exactly solvable, i.e., we can obtain the solution of the proximal operator of each part function. This is fine since we are proving a lower bound there. In summary, the main results in our paper are built on 1), 2), and 3), and concern only the communication complexity. Moreover, since the gradient complexity is also important in the machine learning and optimization community, we turn to a more common setup in Sec 3.3 by additionally assuming 4) each part (or just one part) of the finite-sum objective is $L$-smooth, which appears in many previous works (e.g., references [29, 31] in our paper). Now we turn to the reviewer's questions. 1. Katyusha attains the optimal gradient complexity under assumptions 1) and 4), which differ from our setup, so a direct comparison is not appropriate.
However, if we study the setup under 1), 2), and 4), both algorithms apply, since 2) and 4) together guarantee 3) in our setup. In this case, we acknowledge that *our current method (AccSVRS) is not optimal in gradient complexity in general*, meaning that our method could be worse than Katyusha; but when the coefficient $\delta$ in assumption 2) lies in the proper range $\delta = \Theta(\sqrt{\mu L})$, our AccSVRS nearly recovers the optimal gradient complexity under assumptions 1) and 4) and is comparable to Katyusha. Finally, as we replied to the first reviewer, the discussion in Sec 3.3 only confirms that our algorithm can recover the optimal computation complexity of the classical (average-)smooth and strongly convex case *under some relationship between the smoothness and similarity constants*. The discussion in Section 3.3 is meant to show the minor benefit in gradient complexity under 4) in some restricted cases, rather than to affirm that our method is optimal in gradient complexity, since under Assumptions 1), 2), 3), the gradient complexity is not even available. 2. Now we explain why the communication complexity of (Loopless-)Katyusha does not contradict our lower bounds. It is worth emphasizing that (Loopless-)Katyusha requires that each component is $L'$-smooth (we adopt a different symbol here to avoid ambiguity). Then the gradient complexity and the communication complexity of Katyusha are indeed the same, i.e., $n + \sqrt{n \frac{L'}{\mu} }$. Meanwhile, one can check that the hard instance constructed for our lower bound only satisfies that each component is $c \sqrt{n} \delta$-smooth for some constant $c>0$ (if necessary, we will add the detailed computation in a later version). If $\delta = L$, applying Katyusha to our hard instance in fact yields the $n + n^{3/4} \sqrt{\frac{L}{\mu}}$ communication complexity instead of $n + \sqrt{ n \frac{L}{\mu} }$.
This implies that for the special case with $L' = \Theta( \sqrt{n} \delta )$, Katyusha attains the optimal communication complexity. As a comparison, the optimality of our method in terms of communication complexity requires neither the smoothness of the component functions nor any relationship between the smoothness and similarity constants. 3. We thank the reviewer for pointing out the two setups in distributed optimization. We consider both setups important; here we only motivate the setting in our paper. For example, in a business or communications network, communication between any two nodes can incur charges and risks, so a full round cannot be viewed as a single parallel communication. Moreover, fully parallel execution may not be available, since the environment behind each node is different and complex. However, if the environments are similar, parallelism is possible and can save total time. We will add some explanation in a later version. --- Rebuttal Comment 1.1: Title: Final decision Comment: Thank you! I quickly went through the comment. It seems that my questions have been addressed. Thank you for the explanation. I will slightly increase the score. Good luck with other rebuttals.
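As a sanity check on the exchange above, one can verify numerically that when the component smoothness is $L' = \sqrt{n}\,\delta$ (taking $c = 1$ for illustration), Katyusha's communication complexity $n + \sqrt{nL'/\mu}$ coincides with the $n + n^{3/4}\sqrt{\delta/\mu}$ bound. This is our illustration under those assumptions, not code from the paper.

```python
import math

# Communication complexities discussed above, with constants dropped.
def katyusha_comm(n, L_prime, mu):
    # Katyusha: n + sqrt(n * L'/mu); one gradient (communication) per iteration
    return n + math.sqrt(n * L_prime / mu)

def accsvrs_comm(n, delta, mu):
    # AccSVRS-style bound: n + n^{3/4} * sqrt(delta/mu)
    return n + n ** 0.75 * math.sqrt(delta / mu)

n, mu, delta = 10_000, 1.0, 100.0
L_prime = math.sqrt(n) * delta  # smoothness of the hard instance, taking c = 1
# On the hard instance the two bounds coincide up to constants:
assert math.isclose(katyusha_comm(n, L_prime, mu), accsvrs_comm(n, delta, mu))
```

Algebraically, $\sqrt{n \cdot \sqrt{n}\,\delta/\mu} = n^{3/4}\sqrt{\delta/\mu}$, which is exactly the rebuttal's point: Katyusha does not beat the lower bound on the hard instance.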
NeurIPS_2023_submissions_huggingface
2023
Summary: This work considers finite-sum (distributed) optimization problems in the strongly convex and second-order similar regime. The authors propose SVRS and its accelerated variant, AccSVRS, to solve the problem and provide the corresponding communication and computation complexities, which outperform existing works in several respects. The authors further characterize a lower bound for solving such problems, which validates the near-optimality of the proposed AccSVRS algorithm. Strengths: 1. The study is comprehensive in general, covering both upper and lower bounds. 2. The proposed algorithm achieves near-optimal communication/computation complexity. 3. The design of the algorithm, which incorporates PPA, VR, gradient sliding and Katyusha, is interesting. 4. The paper is well-written, and the flow is clear. Weaknesses: 1. The design of AccSVRS, as the core algorithm achieving near optimality, could be further elaborated. For now, I do not have a clear understanding of Steps 5 and 6 in AccSVRS. Some discussion similar to Section 3.1, connecting to the Katyusha X paper, would be appreciated. 2. Some more questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Line 232, you mentioned you recovered the optimal complexity of the average smooth setting. With the additional AveSS assumption, I might expect the complexity to possibly be (strictly) better than in the classical $L$-smooth $\mu$-strongly convex case (assuming all components are $L$-smooth here for convenience). In your proof of Appendix E, you mentioned the obtained complexity is only smaller than that of the average smooth setting (Line 590), but shouldn't equivalence (rather than only domination) be verified here? 2. With the current result in Section 3.3, can we argue that possibly second-order similarity does not help the gradient complexity of the classical case? 3. Line 591 in Appendix E (or Line 234), why do you drop the $O(n^{3/4}(L/\mu)^{1/4})$ term?
Compared to the last $O(\sqrt{nL/\mu})$ term, is there a possibility that the $n^{3/4}$ term will dominate? Am I missing anything here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and suggestions. Here we reply point by point. 1. Reply to Weakness 1: We apologize for the lack of clarity due to the page limit. We must admit that Steps 5 and 6 in AccSVRS are somewhat tricky, so we only offer some high-level intuition here. **The difference between AccSVRS and KatyushaX is due to the different choice of distance space.** Recapping the connection with Bregman-SVRG in Section 3.1, we adopt the reference function $f_1(\cdot)+\frac{1}{2\theta}||\cdot||^2$ to control $f(\cdot)$ instead of $\frac{1}{2\theta}||\cdot||^2$ in the original SVRG, so the distance space in our proof is induced by the Bregman divergence of $h(\cdot)=f_1(\cdot)+\frac{1}{2\theta}||\cdot||^2-f(\cdot)$, as shown in Lemma 3.1 and thereafter. Now we turn back to AccSVRS, which is motivated by the general framework of KatyushaX (see (4.1) in its arxiv version: https://arxiv.org/pdf/1802.03866.pdf). Thus you can see that AccSVRS (Alg. 2) shares the same structure as KatyushaX. The main difference is that in our setup, the gradient mapping step for computing $\mathcal G_{k+1}$ is not based on the standard norm $\frac{1}{2\theta}||\cdot||^2$, but on the space produced by the Bregman divergence $D_h(\cdot, \cdot)$. Hence, the gradient mapping should be $\nabla h(x_{k+1}) - \nabla h(y_{k+1})$ instead of $\frac{x_{k+1} - y_{k+1}}{\theta}$. Next, noting that $\nabla h(x) = \nabla f_1(x) - \nabla f(x) + \frac{x}{\theta}$ would introduce the heavy gradient-computation term $\nabla f(x)$, we further adopt its stochastic version $\nabla f_1(x) - \nabla f_{j_k}(x) + \frac{x}{\theta}$ by uniformly sampling $j_k \sim \mathrm {Unif} ([n])$ to reduce the communication complexity. (P.S. Indeed, using $\nabla h(x)$ directly for computing $\mathcal G_{k+1}$ in AccSVRS is fine, since the communication complexity of $\mathrm {SVRS^{1ep}}$ is $\Theta(n)$ when $p=1/n$.
Here we employ a stochastic version to further reduce the communication complexity.) If the reviewer finds this explanation helpful, we will add it to a later version. 2. Reply to Questions 1 & 2: We believe our description caused some confusion. **Our setting is generally not directly comparable to the classical smooth and strongly convex setting.** First, we restate the full set of assumptions of our problem: 1) the finite-sum objective is strongly convex, 2) the finite-sum objective satisfies average second-order similarity, and 3) the proximal operator of just one part (or each part) is approximately solvable, meaning that we can get an approximate solution of the proximal step (as in Eq. (9) in our paper). Our main results concern the communication complexity. As you can see, the computation complexity (i.e., gradient complexity) is not available in our main theorems, since we do not know the gradient complexity of obtaining an approximate solution of the proximal operator without further assumptions. (P.S. Note that our lower bound is also in this framework, but we strengthen 3) from approximately solvable to analytically solvable. This is fine since we are proving a lower bound there.) Due to the importance of total computation in the machine learning and optimization community, we turn to a more common setup by assuming 4) each part (or just one part) of the finite-sum objective is smooth, which appears in many previous works (e.g., references [29, 31] in our paper). The computation complexity in Sec 3.3 is a by-product of our results for a smaller function class due to the additional smoothness assumption 4). Meanwhile, compared to the classical smooth and strongly convex case, we further need to assume 2). Thus, we may wonder whether the computation complexity in this stricter setting is *at least as good as* the optimal complexity without 2), since that setting is well-studied.
The discussion in Sec 3.3 only confirms that our algorithm can recover the optimal computation complexity of the classical (average-)smooth and strongly convex case *under some relationship between the smoothness and similarity constants*. We do not know the optimal gradient complexity under 1), 2), and 4) up to now, but we would be glad to see related work or try such a setting in the future. We agree with the reviewer's conjecture that the computation complexity may be better after assuming 2), but one cannot conclude from our paper that second-order similarity does not help the gradient complexity. 3. Reply to Question 3: Thank you for your careful review, even of the Appendix. Noting that $n+\sqrt{n L/\mu} \geq 2\sqrt{n \cdot \sqrt{n L/\mu}} = 2 n^{3/4}(L/\mu)^{1/4}$, we can drop the term $n^{3/4}(L/\mu)^{1/4}$ relative to the remaining term $n+\sqrt{n L/\mu}$ in the final complexity, after hiding some constants. --- Rebuttal Comment 1.1: Comment: Thank you for the reply, my questions have been addressed. I will keep my score here.
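The dropping step in the reply to Question 3 is just the AM-GM inequality $a + b \ge 2\sqrt{ab}$ with $a = n$ and $b = \sqrt{nL/\mu}$. A quick numeric check (our illustration, not from the paper) confirms the $n^{3/4}$ term never dominates:

```python
import math

def n34_term_dominated(n, L_over_mu):
    """AM-GM gives n + sqrt(n*L/mu) >= 2 * n^{3/4} * (L/mu)^{1/4},
    so n^{3/4}(L/mu)^{1/4} is at most half of the remaining complexity."""
    lhs = n ** 0.75 * L_over_mu ** 0.25
    rhs = 0.5 * (n + math.sqrt(n * L_over_mu))
    return lhs <= rhs

# Holds across many scales of n and of the condition-number ratio L/mu:
assert all(n34_term_dominated(n, r)
           for n in (1, 10, 1_000, 10**6)
           for r in (0.3, 7.0, 123.0, 1e9))
```

Equality holds exactly when $L/\mu = n$, i.e. when $n = \sqrt{nL/\mu}$; everywhere else the dropped term is strictly smaller.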
Online Constrained Meta-Learning: Provable Guarantees for Generalization
Accept (spotlight)
Summary: The paper studies the problem of online meta-learning with constraints. After formalizing the problem, the paper proposes an algorithm for the case where the loss function is convex, using Follow-the-Perturbed-Leader (FTPL) to update the meta-objective. The paper then theoretically proves upper bounds on the regret and the constraint violation for the constrained learning setting. Finally, the paper presents experimental results on two different applications, meta imitation learning with collision avoidance and robust few-shot image classification. Strengths: ### Originality - The paper presents a detailed theoretical analysis in the constrained setting. - The algorithm combines iMAML [1] with FTPL (instead of FTL) for the online setting. ### Quality - The paper presents and proves theoretical upper bounds in the constrained setting with the proposed algorithm, with substantial theoretical development in the appendix, which shows the soundness of the approach. ### Clarity - The formalization of the problem is clear. ### Significance - Results on the different benchmarks and applications presented show strong performance compared to baselines. [1]: Rajeswaran, A., Finn, C., Kakade, S. M., & Levine, S. (2019). Meta-learning with implicit gradients. Advances in neural information processing systems, 32. Weaknesses: ### Clarity - The relation of the proposed algorithm to previous work is quickly dismissed. The authors introduce online meta-learning algorithms in l.23-33, and in the related works section (l.88-98), the paper presents only optimization-based meta-learning algorithms. However, there is no discussion clearly explaining the differences with the proposed algorithm. - The way the tasks are set up in the applications is not very well described.
Specifically, in the robust few-shot classification benchmarks, it is not clear how the benchmark is adapted for this *online* setting, since these datasets are more commonly used for few-shot meta-learning. ### Quality - If I understand correctly, the approach presented is a combination of iMAML with FTPL for the online setting and with the constraint penalty. Thus, it would make sense to add other meta-regularization algorithms as baselines in the benchmarks, such as iMAML at least. - In the robust few-shot classification application presented, the other baselines are not designed for *online* meta-learning. The comparison seems unfair. - The authors state that their approach speeds up the adaptation to new tasks (l.357), but we can see in Figure 3 (right) that it is the opposite: their algorithm takes more time to adapt. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would like the authors to describe how the methods are evaluated on the robust few-shot image classification benchmarks: - How many tasks are the models trained on? - Are the tasks different between meta-training and meta-testing? If so, how many meta-testing tasks are considered? Otherwise, I would like to see results of online meta-learning methods on this benchmark. - It's not totally clear to me why the algorithm would be specific to the constrained case. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper does not include a discussion of the limitations of the approach. The algorithm presented seems to be more costly to run than online MAML.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and effort in reviewing our paper. We address your concerns as follows. >**Weakness of Clarity 1.** **Answer:** Sorry about the missing discussion. Here we explain the relation of the proposed algorithm to previous works and the intuition behind our algorithm design. We will add this to our revised manuscript. In constrained meta-learning, the task-specific parameter should satisfy its given constraints. We employ the meta-regularization approach with hard constraints as the within-task algorithm, and use follow-the-perturbed-leader to handle the meta-objective over the sequence of tasks. In the meta-initialization approaches shown in [38, 20, 21, 3, 32], the within-task algorithm only takes a few optimization steps, so even if we optimize to reduce the constraint violation, the solution is far from feasible. On the other hand, the meta-regularization approaches shown in [15, 14, 43, 29] fully solve the within-task problem. To prioritize constraint satisfaction, we combine the meta-regularization approach with hard constraints for constrained meta-learning. >**Weakness of Clarity 2.** **Answer:** Sorry about the missing description. Here we describe the experiment settings for robust few-shot classification and how we adapt them to the online setting. >> What are the online setting and the data setting for meta-training and meta-test? Take the mini-ImageNet dataset as an example (similar for the CUB dataset). The mini-ImageNet dataset includes 100 classes of images: 64 classes for training images, 16 classes for validation images, and 20 classes for test images. During online meta-training, in each round we sample a task only from the 64 training classes and regard it as the revealed task, i.e., we sample a 5-way k-shot task (5 classes and k images per class). There are 200 rounds of online learning, so we sample 200 tasks from the training data.
In the meta-test for Table 2, we use the test dataset with 20 classes. From the 20 test classes, we sample 600 tasks, i.e., 600 5-way k-shot tasks, which means that the image classes in the 600 meta-test tasks are unseen in the training tasks. >> How are the baseline methods adapted for this online setting? Since the baseline methods are offline, for comparison we adapt them to the online setting in a way similar to Algorithm 2 (shown in Appendix A). Specifically, in each round, we sample a batch of tasks from the revealed tasks, and use the gradient-based optimization method on their meta-objective functions over the data of the sampled tasks. The modification of the offline baseline methods to their online versions exactly follows our approach in terms of optimization of the online meta-objective. Moreover, the dataset setting is the same for all baseline methods and our approaches. We will include the discussions in the revised manuscript. >**Weakness of Quality 3.** **Answer:** As existing meta-regularization algorithms can handle neither the constraints for tasks nor online meta-learning (the sequential-task setting), we cannot directly compare with them on the online constrained meta-learning problem. We compare our method with online-MAML. It is an online meta-learning method, and we add the constraint penalty loss to enable it to handle the constraints. As shown in the iMAML paper, the difference in accuracy between MAML and iMAML is small (not larger than 1\%) on the two datasets. So, we think that online-MAML and an online version of iMAML will have comparable performance, and we only compare with online-MAML for convenience. >**Weakness of Quality 4.** **Answer:** As shown in the response to Weakness 2, since the baseline methods are offline, to enable comparison we modify them to their online versions.
The modification of the offline baseline methods to their online versions exactly follows our approach in terms of optimization of the online meta-objective. When an online learning problem is studied for the first time, it is standard practice to modify existing offline methods to solve the new online problem and compare the modified methods with the newly developed ones [21][R2]. Thus, the comparisons are fair. [R2] Yao, Huaxiu, et al. "Online structured meta-learning." >**Weakness of Quality 5.** **Answer:** Sorry about the confusion. We intended to claim that "Figure 3 shows that our method achieves an adaptation time comparable to online-MAML while outperforming online-MAML in terms of test error and collision avoidance." We will modify the statement in our revised manuscript. >**Question 1. How many tasks are the models trained on?** **Answer:** As shown in the response to Weakness 2, the mini-ImageNet dataset includes 100 classes of images: 64 classes for training images, 16 classes for validation images, and 20 classes for test images. From the 64 training image classes, we sample 200 tasks (5-way k-shot learning tasks) for meta-training. From the 20 test image classes, we sample 600 tasks for the meta-test. So the test tasks used to evaluate performance are unseen in the meta-training phase. >**Question 2. Are the tasks different between meta-training and meta-test? How many meta-test tasks are considered?** **Answer:** As shown in the responses to Weakness 2 and Question 1, the tasks in meta-training and meta-test are different. The test tasks used to evaluate accuracy (Table 2) are unseen in the meta-training phase. We have 600 tasks sampled from the test dataset as the meta-test tasks. >**Question 3.
Why would the algorithm be specific to the constrained case?** **Answer:** Algorithm 2 (in Appendix A) uses the primal-dual approach to solve the constrained optimization in Equation (3) (line 3 of Algorithm 1), and uses the constrained bilevel optimization analysis [50] to compute the gradient of the constrained bilevel objective function in Equation (4) and minimize that function (line 6 of Algorithm 1). --- Rebuttal Comment 1.1: Comment: Thanks for the detailed answer and for the clarifications. I'm satisfied with the authors' answers, I don't have any more concerns, and I raise my score. I encourage the authors to add these discussions in the revised version.
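For concreteness, the task-sampling setup described in the rebuttal above (64/16/20 class split of mini-ImageNet, 200 online meta-training tasks, 600 meta-test tasks of 5-way k-shot data) could be sketched as follows. The data structures and names are illustrative, not the authors' code.

```python
import random

def sample_episode(class_to_images, n_way=5, k_shot=1, rng=random):
    """Sample one N-way k-shot task from a class split, as in the rebuttal.

    `class_to_images` maps a class label to its list of images; the mapping
    and the image representation here are placeholders.
    """
    classes = rng.sample(sorted(class_to_images), n_way)
    return {c: rng.sample(class_to_images[c], k_shot) for c in classes}

# 200 online meta-training tasks from the 64 training classes, and
# 600 meta-test tasks from the 20 held-out test classes (dummy data here):
train_split = {f"train_{i}": list(range(30)) for i in range(64)}
test_split = {f"test_{i}": list(range(30)) for i in range(20)}
train_tasks = [sample_episode(train_split, k_shot=5) for _ in range(200)]
test_tasks = [sample_episode(test_split, k_shot=5) for _ in range(600)]
```

Because the two splits are disjoint, every class in a meta-test task is unseen during online meta-training, matching the evaluation protocol described above.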
Summary: In this paper, a novel online constrained meta-learning framework is presented. The framework is designed to facilitate continuous learning from sequential tasks while ensuring that these tasks adhere to strict constraints. In addition to existing analyses of meta-learning, this study goes further by presenting upper bounds on the optimality gaps and constraint violations that arise from the proposed framework. The framework takes into account the dynamic regret of online learning and the generalization capability of the task-specific models. Finally, the paper offers a practical algorithm to implement the framework, and its effectiveness is validated through experiments conducted in the domains of meta-imitation learning and few-shot image classification. Strengths: - The paper handles a non-convex meta-objective. - It studies dynamic regret. - Two elaborate applications demonstrate the effectiveness of the algorithm. Weaknesses: - The bound scales with $\mathcal{O}(\frac{1}{\sqrt{T}})$. This seems to ignore the size of the individual training datasets. - A comparison with existing online meta-learning bounds is necessary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In section 2.1, the proposed constrained optimization paradigm requires that different tasks satisfy certain constraints on errors under any task-adaptive parameters. Intuitively, this may seem contradictory to improving the performance of specific tasks. Task-adaptive parameters should ideally be focused solely on specific tasks, so why is there a need to ensure the performance of other tasks simultaneously? Is it more reasonable to constrain the meta-parameters? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It should be compared with existing online meta-learning bounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and effort in reviewing our work. Thanks for your suggestions. We address your concerns as follows. > **Weakness 1. The bound is scaled with $\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$. It seems to ignore the size of training datasets.** **Answer:** The upper bound in Theorem 1 is $\mathcal{O}\left(\mathcal{S}^*\left(\mathcal{T}\_{1: T}\right) \sqrt{\frac{\ln \left|\mathcal{D}\_0^{t r}\right|}{\left|\mathcal{D}\_0^{t r}\right|}}+\sqrt{\frac{\ln \left|\mathcal{D}\_{+}^{t r}\right|}{\left|\mathcal{D}\_{+}^{t r}\right|}}+\sqrt{\frac{\ln \left|\mathcal{D}\_{0}^{val}\right|}{\left|\mathcal{D}\_{0}^{val}\right|}}+\frac{1}{\sqrt{T}}\right)$. As shown in Appendices E and F, the coefficients of the notations $\mathcal{O}$ are independent of the size of training datasets and only depend on some constants, such as the Lipschitz constant $L_0$. * The first term $\left(\mathcal{S}^*\left(\mathcal{T}\_{1: T}\right) \sqrt{\frac{\ln \left|\mathcal{D}\_0^{t r}\right|}{\left|\mathcal{D}\_0^{t r}\right|}}+\sqrt{\frac{\ln \left|\mathcal{D}\_{+}^{t r}\right|}{\left|\mathcal{D}\_{+}^{t r}\right|}}+\sqrt{\frac{\ln \left|\mathcal{D}\_{0}^{val}\right|}{\left|\mathcal{D}\_{0}^{val}\right|}}\right)$ depends on the size of training datasets. As shown in Equation (5) and Proposition 2, the term quantifies the generalization error when limited training data is given. * The last term $\left(\frac{1}{\sqrt{T}}\right)$ is independent of the size of training datasets. The term $\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$ is the gap of the meta-objective function $ \sum_{t^{\prime}=1}^{t} \mathcal{L}^{val}(\mathcal{A}lg(\lambda,\phi, \mathcal{D}\_{t^{\prime}}^{tr}),\mathcal{D}\_{0,t^{\prime}}^{val} ) $ (defined in line 193) between the meta-parameter $\phi=\phi_t$ produced by our algorithm and the optimal meta-parameter $\phi=\phi^*$. 
The size of the training dataset will not influence the gap of the meta-objective function values between $\phi_t$ and $\phi^{\*}$, because the values at both $\phi_t$ and $\phi^*$ use $\mathcal{A}lg$ in Equation (3) with the same training data (limited data size) to obtain the solutions. No matter whether the size of the training data is large or small, the size is shared by $\phi\_t$ and $\phi^{\*}$ and imposes the same error on $\phi\_t$ and $\phi^{\*}$. > **Weakness 2. Comparison with existing online meta-learning bounds.** **Answer:** In Table 1, we compare the metrics used in our paper with those in existing works [3] [14] [1] [21]. The metrics used in this manuscript are harder to quantify than those in most existing papers, in terms of constraint, generalization, and dynamic regret. Here, we compare our results with [3] [14] [1] [21] [R1]. * We consider the constraints in each learning task, and thus need to (a) quantify the constraint violations and (b) quantify the error on the loss function introduced by the inexact constraint approximation. These are not considered in any existing work. * If all the constraints are removed from our problem, our result $\mathcal{O}(\mathcal{S}^{\*}(p(\mathcal{T})) \sqrt{\frac{\ln{|\mathcal{D}\_{0}^{tr}|}}{|\mathcal{D}\_{0}^{tr}|}} +\frac{1}{\sqrt{T}})$ has the same order as the upper bound ${\mathcal{O}}(\ln(n) / \sqrt{n})+\mathcal{O}(1 / \sqrt{T})$ shown in [R1] and [14], in terms of the number of tasks $T$ and the number of within-task data points $n$ or $|\mathcal{D}_{0}^{tr}|$. * Paper [3] does not consider the generalization error produced by the limited data size, and considers a strongly-convex meta-objective (ours is non-convex). Then, the bound is $\mathcal{O}(\mathcal{S}^{*}(p(\mathcal{T}))+\frac{1}{{T}})$, which is independent of $|\mathcal{D}_{0}^{tr}|$ and has the order of $\mathcal{O}(\frac{1}{{T}})$ because of the strong convexity.
* Paper [1] considers a strongly convex meta-objective (ours is non-convex) and a static regret (ours is a dynamic regret), and its bound has order $\mathcal{O}(\frac{1}{{T}})$. * In conclusion, for the degenerate case where the constraints are removed, our bound has the same order as the state-of-the-art works. If we further impose stronger assumptions (such as strong convexity) or consider a simpler metric (such as ignoring generalization), the bound could be better. We will include the above discussion in the revised manuscript. Here is the reference. [R1] Denevi et al., 'Online-Within-Online Meta-Learning'. > **Question 1. In section 2.1, the proposed constrained optimization paradigm in the paper requires that different tasks satisfy certain constraints on errors under any task-adaptive parameters. Intuitively, this may seem contradictory to improving the performance of specific tasks. Task-adaptive parameters should ideally be focused solely on specific tasks, so why is there a need to ensure the performance of other tasks simultaneously?** **Answer:** In section 2.1 and the other sections of the manuscript, a task-specific parameter only needs to satisfy the constraints of its own task; we do not require it to satisfy the constraints of other tasks. In section 2.1, the task $\mathcal{T}_t$ is characterized by its data distributions $\mathcal{D}\_{t}=\{\mathcal{D}\_{0,t},\mathcal{D}\_{1,t}, \ldots, \mathcal{D}\_{m,t}\}$. Here, $\mathcal{D}\_{i,t}$ is the constraint dataset for task $\mathcal{T}_t$ only, and the constraint $\mathbb{E}\_{z \sim \mathcal{D}\_{i,t}}\left[\ell_i(\theta,z)\right] \leq c\_{i,t}$ on $\mathcal{D}\_{i,t}$ is specific to task $\mathcal{T}\_t$.
From Equation (1), different tasks (e.g., $\mathcal{T}\_{t_1}$ and $\mathcal{T}\_{t_2}$) have different constraint datasets ($\mathcal{D}\_{i,t_1}$ and $\mathcal{D}\_{i,t_2}$ for the $i$-th constraint), and must satisfy different constraint functions ($\mathbb{E}\_{z \sim \mathcal{D}\_{i,t_1}}\left[\ell_i(\theta,z)\right] \leq c\_{i,t_1}$ and $\mathbb{E}\_{z \sim \mathcal{D}\_{i,t_2}}\left[\ell_i(\theta,z)\right] \leq c\_{i,t_2}$). --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thanks for your detailed response. Could you provide more details on the question "Is it more reasonable to constrain the meta-parameters?" --- Reply to Comment 1.1.1: Title: Answer to the question "Is it more reasonable to constrain the meta-parameters?" Comment: Each task $\mathcal{T}\_t$ is characterized by its loss function on the data distribution $\mathcal{D}\_{0,t}$ and its constraint functions on the constraint data distributions $\{\mathcal{D}\_{1,t}, \ldots, \mathcal{D}\_{m,t}\}$. The task-specific parameter $\theta^{\prime}\_{t}$ for task $\mathcal{T}\_t$ needs to satisfy its task-specific constraints. There is no common constraint shared by all tasks, so it is not reasonable to impose constraints on the meta-parameter.
Summary: The paper studies the theory of biased-regularization meta-learning under the sequential task setting. Despite there having been previous works in this area, this paper distinguishes itself by introducing the concept of the Online Constrained Meta-Learning problem and presenting a straightforward solution. It applies constrained optimization with biased meta-regularization, utilizing Follow-the-Perturbed-Leader (FTPL) to handle the non-convex meta-objective function, providing theoretical analysis and proofs of upper bounds, and developing a practical algorithm for large-scale problems. Empirical experiments validate the effectiveness of the proposed algorithm in meta-imitation and meta-reinforcement learning. Strengths: - This paper is the first to study online constrained meta-learning, which has rarely been considered. To this end, the paper gives a formal problem formulation of constrained sequential learning. - The authors distinguish the assumptions newly proposed in this paper from those adopted from prior work, which makes it easier to examine the assumptions applied in this paper. - A solid work that follows the setting of learning with biased regularization. The authors give a detailed discussion of different cases. Weaknesses: - A particularly relevant missing work: [1] can also be deemed constrained (conditional) meta-learning. The authors should further discuss this work since [1] also shows generalization results. - More implications are needed to explain the results, i.e., Corollary 1. - The bounds of the derived results have not been examined. For instance, the second term on the RHS of Prop 3 is of order $\mathcal{O}(d^2\mathcal{B} L_0^2 \ln |\mathcal{D}^{tr}_0|)$, which can be a dominant term. Furthermore, the diameter $\mathcal{B}$ and model size $d$ can also be large. This result may not apply to overparameterized settings. - The relationship between $\mathcal{D}_t$ and $\mathcal{D}$ is vague.
Suppose $\mathcal{D}$ refers to any $t$ in $\mathcal{D}_t$. The statements of the Propositions and Theorems should clarify this point. - From proofreading Appendix 4, I found that the bounds in Propositions 3 & 4 depend on many terms. However, in Props. 1 & 2, the authors remove the small terms (in Big-O notation) without discussing in what limits these terms are of lower order. ——Minors—— - Confusing statement about “Problem (1), (3)”. Since the authors refer to Equations (1) and (3) as “Algorithms” and “Problems” simultaneously, it may be better to define the “Problems” officially. - Adding a notation cheatsheet in the appendix may make the paper more readable. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Why define $D(\phi, \mathcal{T}_{1:T})$, $\mathcal{S}^*(\rho(\mathcal{T}))$ in square-root form? - What will happen if we only learn the same task, i.e., $\mathcal{S}^*(\rho(\mathcal{T})) \to 0$, so that the coefficient becomes infinitely large, $\lambda \to \infty$? Do the results still hold true in such a degenerate case? - Should Assumption 1 hold $\forall t$? If so, please make it clear. - As mentioned above, what does the big-O notation mean in the results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A pure theoretical work. **References**: [1] Denevi, Giulia, Massimiliano Pontil, and Carlo Ciliberto. "The advantage of conditional meta-learning for biased regularization and fine-tuning." Advances in Neural Information Processing Systems 33 (2020): 964-974.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and effort in reviewing our work, and for your suggestions and reference recommendation. We address your concerns as follows. > **Weakness 1. Discussion of connections with conditional meta-learning.** **Answer:** Thanks for the reference. Constrained meta-learning and conditional meta-learning aim to achieve different goals. In meta-learning, the within-task goal is to minimize the expected loss, and the aim is to learn a shared meta-parameter for all tasks that improves the learning of new tasks. In conditional meta-learning, the within-task goal is the same as in meta-learning; different from meta-learning, however, the aim is to learn a map from task information to a task-specific meta-parameter that facilitates within-task learning, rather than a single meta-parameter shared by all tasks. In our constrained meta-learning, the within-task goal is to minimize the expected loss while satisfying the imposed constraints. Such constraints are included in neither meta-learning nor conditional meta-learning, and the approaches to solving these problems are also different. > **Weakness 2. More implications of Corollary 1.** **Answer:** Here is the implication of Corollary 1. We will include the discussion in the revised manuscript. Corollary 1 considers the case where the revealed tasks are sampled from a static task distribution. In this case, when $T$ is sufficiently large, the online meta-learning algorithm degenerates to an algorithm for offline meta-learning. If we further ignore the constraints in our problem setting, the bound of Corollary 1 has the same order as the upper bound shown in [14]. > **Weakness 4. The relationship between $\mathcal{D}\_t$ and $\mathcal{D}$.** **Answer:** Sorry about the confusion.
We denote ${D}(\phi, \mathcal{T}\_{1:T})$ as the parameter distance, $\mathcal{D}\_t$ as the data distribution for task $\mathcal{T}\_t$, and $D$ as the edge length. These notations are too similar. We will change ${D}(\phi, \mathcal{T}\_{1:T})$ to $\mathcal{D}ist(\phi, \mathcal{T}\_{1:T})$ and $D$ to $D\_l$ in the revised manuscript. > **Weakness 5 and Question 4. With respect to what limits is the Big-O notation taken / what does the big-O notation mean in the results?** **Answer:** We consider the Big-O notation only with respect to the limits of (i) the data sizes, including $|\mathcal{D}\_{0}^{tr}|$, $|\mathcal{D}\_{0}^{val}|$, and $|\mathcal{D}\_{+}^{tr}|$, (ii) the task similarity $\mathcal{S}^{\*}( \mathcal{T}\_{1:T})$, and (iii) the number of tasks $T$. > **Weakness 6. Referring to Equations (1) and (3) as “Algorithms” and “Problems” simultaneously.** **Answer:** Sorry about the confusion. Equation (1) is a problem. Equation (3) is an algorithm that approximates the solution of Equation (1); Equation (3) itself includes an optimization problem. We will clarify them in the revised manuscript. > **Weakness 7. Adding a notation cheatsheet in the appendix may be more readable.** **Answer:** Thank you for the suggestion. We attach the notation list in the global rebuttal PDF file, and will add it to the revised appendix. > **Question 1. Why define $D\left(\phi, \mathcal{T}\_{1: T}\right), \mathcal{S}^{\*}(\rho(\mathcal{T}))$ in square-root form?** **Answer:** Following [14], we define $D\left(\phi, \mathcal{T}\_{1: T}\right)$ and $\mathcal{S}^*(\rho(\mathcal{T}))$ as metrics of the parameter distance, and thus consider the square root of the quadratic sum $\frac{1}{T}\sum_{t=1}^T \frac{1}{2}\|\|{\theta}^{\*}_t-{\phi}\|\|^2$. > **Question 2. What will happen if we only learn the same task, i.e., $\mathcal{S}^{\*}(\rho(\mathcal{T})) \rightarrow 0$, so that the coefficient becomes infinitely large, $\lambda \rightarrow \infty$?
Do the results still hold true in such a degenerate case?** **Answer:** The result still holds in the degenerate case. Here is the reason. When $\lambda \rightarrow \infty$, the term $\frac{\lambda}{2}\|\|\theta-\phi_t\|\|^2$ completely dominates the objective function of the optimization problem in Equation (3), and the problem reduces to ${\theta}\_t = \mathcal{A}lg(\lambda,\phi\_t, \mathcal{D}\_{t}^{tr})=\arg\min_\theta \|\|\theta-\phi_t\|\|^2, s.t. \frac{1}{|\mathcal{D}\_{i,t}^{tr}|} \sum\_{z \in \mathcal{D}\_{i,t}^{tr}}\ell_i(\theta,z) \leq c\_{i,t}, \ i=1, \ldots, m. $ As we can see, the solution ${\theta}\_t$ does not depend on $|\mathcal{D}\_{0}^{tr}|$ and depends only on $|\mathcal{D}\_{+}^{tr}|$, $|\mathcal{D}\_{0}^{val}|$, and $T$. This corresponds to Theorem 1 with $\mathcal{S}^*(\rho(\mathcal{T})) \rightarrow 0$, where the result does not depend on $|\mathcal{D}\_{0}^{tr}|$ and depends only on $|\mathcal{D}\_{+}^{tr}|$, $|\mathcal{D}\_{0}^{val}|$, and $T$. So the results still hold in such a degenerate case. > **Question 3. Should Assumption 1 hold for all $t$?** **Answer:** Yes, we will clarify it in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanations of my questions. I have decided to increase the score since the authors addressed my concerns.
Summary: The authors propose an online constrained meta-learning algorithm that is able to sequentially learn a sequence of tasks subject to hard (and stochastic) constraints. The authors also theoretically quantify the optimality gaps and constraint violations produced by the proposed method, by considering the dynamic regret of online learning and the generalization ability of the task-specific models. They also validate the effectiveness of the proposed method in numerical experiments on meta-imitation learning and few-shot image classification. Strengths: The authors validate their proposed method both theoretically and experimentally. The authors present a new meta-learning method aimed at facing the more challenging situation in which the tasks are stochastically subject to constraints. Weaknesses: The statements and the notation of the paper could be simplified and made more intuitive, less heavy. Why can the constrained meta-learning framework be interesting in practical applications? The authors do not adequately motivate the setting they consider. This is a very important aspect in my opinion. Could you please describe some examples of possible applications in which the proposed constrained setting can be useful/necessary? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The constraints are random variables. This seems to be problematic. Which is the main trick you use to deal with it? The authors use the notion of variance of the task optimal parameters to measure the similarity among the tasks, as in [14] and the paper [A] I mention below. Could you please make a clearer comparison between your rates and those obtained in these other works, by just looking at the main important constants and leading terms w.r.t. the number of tasks and within-task points? Consider also the different assumptions. [A] Denevi et al., 'Online-Within-Online Meta-Learning'. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I do not see any potential negative societal impact related to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time and effort in reviewing our work, and for your suggestions and reference recommendation. We address your concerns as follows. > **Weakness 1. The statements and the notation of the paper could be simplified and made more intuitive.** **Answer:** Thank you for the suggestion. We will simplify the notation; we attach the notation list in the global rebuttal PDF file, and will add it to the revised appendix. > **Weakness 2. Why can the constrained meta-learning framework be interesting in practical applications, and what are the motivating examples?** **Answer:** Constrained meta-learning learns a meta-parameter from existing constrained learning tasks, where each task needs to minimize its expected loss while satisfying its constraints. When a new constrained learning task is revealed, the meta-parameter is adapted to the new task, which improves learning efficiency and reduces the expected risk and constraint violation on the new task. Existing meta-learning approaches can only learn from unconstrained learning tasks. Below are a couple of examples that motivate the constrained meta-learning framework. * One motivating example is imitation learning with collision avoidance in changing environments, as shown in our first experiment (Section 5.1). The expert performs demonstrations in a free space. The learner can observe the demonstrations and is asked to quickly perform the task in a new cluttered environment. The new environment is uncertain and unknown to the learner until the task is revealed. * Another motivating example is robot control under different dynamics. Consider a scenario in which a robot needs to quickly deploy a new control policy once its dynamics change. The problem can be formulated as constrained meta-learning, where actuation limitations are imposed as constraints on the control policy.
The within-task problem is to find the task-specific control policy that minimizes the control cost for the robot while satisfying the actuation constraints of the robot dynamics. The meta-algorithm optimizes a meta control policy that adapts to the task-specific control policy once the new dynamics are given. We will include the above discussion in the revised manuscript. > **Question 1. The constraints are random variables. This seems to be problematic. Which is the main trick you use to deal with it?** **Answer:** * In constrained stochastic optimization, it is standard to define both the objective function and the constraint functions as expectations over random variables [7][16]. * In our proposed algorithm, we use the sample average approximation method to approximate the expectation, i.e., we use the empirical average of the constraint functions on the given training data to approximate the constraint function defined over the whole data distribution, as shown in Equation (3) (line 176 in Section 3). * As shown in Proposition 4 of Appendix D and in Appendix C.2, we quantify the error between the sample average approximation and the expectation by analyzing the Rademacher complexity [7][16] of the constraint functions. > **Question 2. Comparison between your rates and those obtained in the related works [A][14].** **Answer:** **Comparison of assumptions.** Papers [A] and [14] assume the learning model is linear, i.e., $\mathcal{E}(Z)=\sum_{i=1}^n \ell_i(\langle x_i, w_i\rangle)$, while ours could be any function, such as a neural network. Our paper, [14], and [A] have similar assumptions about the convexity of the functions. By adding a regularizer to the meta-objective function, the overall meta-objective in [14][A] is convex, while the meta-objective function in our paper is non-convex. **Comparison of metrics.** Paper [14] considers offline meta-learning and quantifies the expected optimality gap of the task-specific parameters over the task distribution.
Paper [A] considers online meta-learning and the dynamic regret of the expected optimality gaps over sequential tasks. Similar to [A], we consider the dynamic regret of the expected optimality gaps. Beyond [A], we consider the constraints, and thus need to (a) quantify the constraint violations and (b) quantify the error on the loss function introduced by the inexact constraint approximation. **Comparison of rates.** If all the constraints are removed from our problem, our result has the same order as the upper bound shown in Corollary 7 of [A] and Theorem 6 of [14], in terms of the number of tasks $T$ and within-task points $n$. We will include the above discussion in the revised manuscript. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for their response. They answered my questions.
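The sample average approximation described in the answer to Question 1 above can be sketched numerically. The constraint function, the data distribution, and all numbers below are hypothetical, chosen only to illustrate how the empirical average of a constraint converges to its expectation as the dataset grows; this is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint_loss(theta, z):
    # Hypothetical constraint function ell_i(theta, z); not from the paper.
    return (theta - z) ** 2

theta = 0.5
# For z ~ N(0, 1), E[(theta - z)^2] = theta^2 + 1 in closed form.
true_value = theta ** 2 + 1.0

# Sample average approximation: the empirical mean over a finite constraint
# dataset stands in for the expectation over the full data distribution.
for n in [10, 1_000, 100_000]:
    data = rng.standard_normal(n)
    empirical = constraint_loss(theta, data).mean()
    print(n, abs(empirical - true_value))
```

The approximation error shrinks on the order of $1/\sqrt{n}$, which is the rate that the Rademacher-complexity analysis mentioned in the rebuttal formalizes with high-probability bounds.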
Rebuttal 1: Rebuttal: We are grateful for the time and effort all reviewers invested in evaluating our manuscript, and for all the suggestions and reference recommendations that make our manuscript a better and stronger contribution. Please find below our detailed replies to all the comments of the reviewers. We notice that some reviewers suggested attaching a notation cheatsheet. We attach a notation list in the global rebuttal PDF file for the convenience of our discussion and also add it to the revised appendix. Pdf: /pdf/092b1f78d5108e745c0d27fd8b81d39551c8ad11.pdf
NeurIPS_2023_submissions_huggingface
2023
Taming Local Effects in Graph-based Spatiotemporal Forecasting
Accept (poster)
Summary: This paper proposes a method to leverage local effects in graph-based spatio-temporal forecasting. The authors claim that existing spatio-temporal graph neural networks are global models, i.e., all nodes share the same set of parameters, and thus may fail to capture some node-specific patterns. On the other hand, local models, in which some layers within the models are node-specifically parameterized, perform better than global ones, but at the cost of many additional parameters. The authors find a method, random node embeddings, to strike a balance between local and global methods. The authors also propose regularizations to improve the transferability of the node embeddings and the resulting models. Experiments over real-world data are given where the proposed method achieves consistent improvements over a variety of models and datasets. Strengths: 1. The studied problem is interesting. It challenges the assumption that a shared global STGNN is used for all nodes, which is standard in previous works. 2. The proposed technique, i.e., trainable node embeddings, is simple and sound. The regularization terms designed also make sense. 3. The proposed technique with trainable node embeddings is effective over various models (DCRNN, AGCRN, GWNet) and real-world datasets, which shows the generality of the proposed technique. 4. The experimental results showing that node embeddings and regularizations are effective in terms of knowledge transfer are a plus. Intuitively, people may think that models with node-specific parameters will not perform well on unseen nodes, but the results show the opposite. Weaknesses: 1. The proposed method with node-specific embeddings is effective, but not new. Specifically, STID [40] proposes exactly the same technique in terms of trainable node embeddings. I am slightly concerned about whether the technical contribution meets the standard of NeurIPS with this existing work. 2.
The fine-grained categorization of spatio-temporal graph neural networks does not seem necessary, e.g., T&S, TTS, and anisotropic vs. isotropic. I fail to see how introducing these concepts helps one better understand the paper, and thus I would suggest these parts be removed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I really like this paper and, since it is in general clearly written, I do not have questions at this time. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and your positive opinion about our work. Please find our point-by-point answers below. > 1. The proposed method with node-specific embeddings is effective, but not new. Specifically, STID [40] proposes exactly the same technique in terms of trainable node embeddings. I am slightly concerned about whether the technical contribution meets the standard of NeurIPS with this existing work. Similar node embeddings have been used in some architectures. In this paper, we rationalize the practice of introducing such trainable components by providing an explanatory framework for the observed empirical results. We believe our contribution fully meets NeurIPS standards as it sheds light on extremely relevant challenges of very popular architectures. In fact, we provide a comprehensive methodology accounting for local components in several settings and across different (global) models. Our framework allows the practitioner to take full advantage of such hybrid models in both transductive and transfer learning settings. > 2. The fine-grained categorization of spatio-temporal graph neural networks does not seem necessary, e.g., T&S, TTS, and anisotropic vs. isotropic. I fail to see how introducing these concepts helps one better understand the paper, and thus I would suggest these parts be removed. The introduction of the different model architectures and design choices is necessary to show that the issues related to global and local aspects are present across a variety of architectures. In other words, the introduced categorization of existing architectures was necessary to carry out a proper and comprehensive empirical evaluation of the phenomena studied in the paper. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledged. Comment: Thanks for your rebuttal. I like this paper and at this point I have no outstanding questions. --- Reply to Comment 1.1.1: Comment: Thank you for the feedback and the review!
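A minimal sketch of the hybrid design discussed in this exchange, a shared (global) model augmented with trainable node-specific embeddings. The sizes, variable names, and plain-NumPy forward pass are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes, window, d_emb, d_hidden = 207, 12, 16, 32

# Global parameters: shared by every node (the message-passing/encoder core).
W1 = rng.standard_normal((window + d_emb, d_hidden)) * 0.1
W2 = rng.standard_normal((d_hidden, 1)) * 0.1

# Local parameters: one small trainable embedding per node.
node_emb = rng.standard_normal((n_nodes, d_emb)) * 0.1

def forward(x):
    """x: [n_nodes, window] past observations; returns one-step-ahead predictions."""
    h = np.concatenate([x, node_emb], axis=1)  # inject node-specific information
    h = np.tanh(h @ W1)                        # shared encoder, same weights for all nodes
    return h @ W2

x = rng.standard_normal((n_nodes, window))
print(forward(x).shape)  # (207, 1)
```

During training, gradients would flow into both the shared weights and `node_emb`; transferring to a new node set would keep the shared weights and re-learn only the embeddings, mirroring the transfer setting discussed in the rebuttal.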
Summary: This paper presents a methodological framework aimed at rationalizing the inclusion of trainable node embeddings in STGNNs for spatiotemporal forecasting applications. The authors examine the interplay between globality and locality in graph-based spatiotemporal forecasting and provide insights and guidelines for specification design. The paper demonstrates how incorporating trainable node embeddings in STGNNs can effectively combine the advantages of shared message-passing layers with node-specific parameters, while efficiently transferring the learned model to new node sets. The proposed framework is supported by empirical evidence and offers a principled approach for accommodating various node embeddings. Strengths: 1. The authors investigate the interplay between globality and locality in graph-based spatiotemporal forecasting, resulting in five major findings. 2. The paper illustrates how including trainable node embeddings in STGNNs can effectively combine the benefits of shared message-passing layers with node-specific parameters and efficiently transfer the learned model to new node sets. Weaknesses: 1. The paper is not well-organized, making it difficult to understand the main points and arguments presented. 2. The proposed framework adopts the TTS model as an STGNN, but some important TTS methods are not discussed in the related work, such as [1] and [2]. [1] Jianfei Gao and Bruno Ribeiro. On the equivalence between temporal and static equivariant graph representations. In International Conference on Machine Learning, pages 7052–7076. PMLR, 2022. [2] Da Xu, etc. Inductive representation learning on temporal graphs. In ICLR 2020. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: The instructions in line 43 and lines 179-180 may appear contradictory. Could you provide a more detailed explanation? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review. Please find our answers below. > The paper is not well-organized, making it difficult to understand the main points and arguments presented. We did our best to make the structure of the paper easy to follow, an aspect that was appreciated by the other reviewers. Currently, the presentation is structured as follows: 1) template global architectures; 2) issues with fully global models; 3) introduction of hybrid global-local architectures (with empirical evidence of the benefits); 4) node embeddings (i.e., more efficient hybrid architectures); 5) overcoming the limitations of hybrid models in transfer learning settings; 6) complete empirical results. There is always room for improvement and we are open to making adjustments based on more specific feedback that the reviewer wishes to provide. > The proposed framework adopts the TTS model as an STGNN, but some important TTS methods are not discussed in the related work, such as [1] and [2]. We indicated that the terminology was indeed adapted from [1]; we will add a reference to [2] in the related works. However, it should be noted that both [1,2] focus on temporal graphs rather than on time series. Finally, note that although we use TTS models as one of the reference architectures, the focus of the paper is not on introducing a new architecture but rather on studying the impact of local effects on existing ones. > The instructions in line 43 and lines 179-180 may appear contradictory. Could you provide a more detailed explanation? The contradiction is only apparent. The point is that global models are indeed (in general) more efficient than local approaches [line 43], as the total number of parameters to be learned is smaller. However, if local effects are present in the data-generating process, this advantage might be compromised [lines 179-180], as more parameters might be needed to properly model them.
This trade-off is what motivates the introduction of the hybrid models in the paper. We will make this apparent contradiction explicit in the paper. Thank you for the comment. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledged Comment: Thanks for your rebuttal and I would like to keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for the answer. As far as we understood, the main issue preventing a higher score was the organization of the paper; do you have any specific feedback about what is still unclear and on how we could improve? Have we satisfactorily addressed weakness 2 and the apparent contradiction in lines 43/179-180?
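The efficiency argument in this exchange (global models are smaller, unless local effects force extra capacity) can be made concrete with rough parameter counts. The figures below are hypothetical, not taken from the paper:

```python
# Hypothetical sizes: 207 nodes (a common traffic-benchmark size),
# a core forecasting model with 300k weights, 32-dim node embeddings.
n_nodes, core_params, d_emb = 207, 300_000, 32

global_model = core_params                  # one shared model for all nodes
local_models = n_nodes * core_params        # a separate model per node
hybrid = core_params + n_nodes * d_emb      # shared model + per-node embeddings

print(global_model, local_models, hybrid)  # 300000 62100000 306624
```

The hybrid variant adds only about 2% more parameters than the fully global model, while the fully local alternative is two orders of magnitude larger, which is the balance the rebuttal's hybrid global-local architectures aim for.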
Summary: This paper examines the interaction between global and local effects in graph-based spatiotemporal forecasting. It addresses the limitations of a single global model by introducing a framework that incorporates trainable node embeddings into graph-based architectures. This framework enables the learning of specialized components and combines the benefits of shared message-passing layers with node-specific parameters. Additionally, the framework facilitates model transfer to new node sets. The paper offers empirical evidence and provides guidelines for adapting graph-based models to the dynamics of each time series to improve prediction accuracy. Strengths: It is nice to see a paper that investigates the attribution of "local" and "global" learning in modeling spatial-temporal graphs. The evaluation is very comprehensive and the paper is very informative. It may have great impact that can benefit the broad community that researches on spatial-temporal graphs. Weaknesses: The paper tries to answer a set of very big questions ("local" vs "global"), which I feel could be too hard to find a concrete answer in a 9 pages conference paper. Similar questions can be asked for GNNs as well: Is message-passing more important or the node feature encoding more important? Should I go fully inductive like GCN, GraphSAGE? Or I just stick to non-inductive GNNs? Is the isotropic message-passing enough like vanilla GCNs, or do I need anisotropic message-passing like graph attention networks (GAT)? Do I interleave the MP layers with node encoding layers like most GCNs do? Or I should stack multiple node encoding layers before doing message-passing? I feel it is a little too overwhelming to answer all these questions at once. I really appreciate the authors' efforts to investigate these questions, but it feel less convincing when it fits into a 9-pages conference paper, that each claim will be supported by less empirical evidences. 
Sometimes I question how such a claim holds for other applications: when the nature of a problem changes, would the conclusions change? What is more challenging is that these questions seem not to have a general answer that holds for all applications, making it especially hard to draw conclusions by merely relying on empirical studies (or you will need a lot of experiments across many more domains). In general, it is overall a technically solid paper and an ambitious one as well. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have to admit that I do not fully understand the paper, though I tried to read through it multiple times, so it may help me understand the key contributions if the authors can provide some information on the following: 1. **What are the guidelines and main takeaway messages for the audience to design models for spatial-temporal graph learning?** I feel a little lost in a vast amount of information and empirical observations. 2. **How do we know which design is the best for an application? Isn't it application-specific?** If we want to know whether the "local" or "global" components in a STGNN are more important for an application, we may want to try it out, or are there any ways to know beforehand? To the best of my understanding, it could be very different from application to application, since global information is more important for some of them while for others local information is more important. 3. **Could the T&S-AMP design be a generic go-to choice?** If we do not know the importance of global or local effects in an application beforehand, can T&S and AMP be a go-to option? Many well-established GNNs for dynamic graphs fall in this category, for example, using attention in MP (AMP) and alternating message-passing layers and recurrent layers (T&S), e.g., Graph Recurrent Attention Networks (GRAN). Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No limitations and negative societal impacts are left unaddressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and useful comments. We are happy that you found our paper interesting; please find our point-by-point answers below. > I feel it is a little too overwhelming to answer all these questions at once. [...] Sometimes I question how such a claim holds for other applications; when the nature of a problem changes, would the conclusions change? What's more challenging is that these questions seem not to have a general answer. The paper tackles complex problems from different perspectives. In particular, we understand that the presentation of many different template architectures for STGNNs without providing guidelines on which architecture should be preferred in general might generate some confusion. However, we believe that introducing the many possible design choices is instrumental in showing that the issues related to global and local aspects (which are the focus of the paper) appear across the full spectrum of architectures used in practice. In this regard, we believe that the paper succeeds (through experiments on both synthetic and real-world datasets) in showing that locality and globality are crucial aspects in graph-based forecasting and in showing how local components can be effectively and efficiently introduced in otherwise global architectures. > Q1 What are the guidelines and main takeaway messages for the audience to design models for spatial-temporal graph learning? The main takeaway messages are summarized in the 5 points in lines 64-75 in the paper.
In particular, 1) local components can be crucial to obtain accurate predictions in spatiotemporal forecasting; 2) node embeddings can amortize the learning of such components in otherwise global architectures; 3) hybrid local-global STGNNs with node embeddings can capture local effects with contained model capacity and a reasonably long input window; 4) node embeddings make adapting models to different scenarios more efficient; 5) structuring the embedding space allows for regularizing the forecasting model. > Q2 How do we know which design is the best for an application? Isn't it application-specific? If we want to know whether the "local" or "global" components in a STGNN are more important for an application, we may want to try it out, or are there any ways to know it beforehand? Indeed, it is application-specific. While fully global models are more flexible, hybrid architectures often perform better in practice. As there is no definitive answer, we suggest trying both architectures to decide whether including the local components is worth the compromise in flexibility. That being said, from our experience in real-world applications, adding local components consistently leads to better performance. > Q3 If we do not know the importance of global or local effects in an application beforehand, can T&S and AMP be a go-to option? A global-local T&S-AMP model is indeed a solid choice if the final task performance is the only concern. However, T&S-AMP models are more computationally demanding than TTS-ISO models, which can nonetheless provide good performance. The practitioner should decide how to balance task performance and computational costs and should be aware of the impact on the final performance of components that take local effects into account. The latter aspect is, as already mentioned, one of the main takeaways of the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions about the paper.
Since my score has already acknowledged the contributions of this work, I will keep it as it is. --- Reply to Comment 1.1.1: Comment: Thank you for the feedback and thank you again for the review.
Summary: In this paper, the authors explore the influence of locality and globality in graph-based spatiotemporal forecasting architectures. Existing spatiotemporal models are global, trained on multiple multivariate time series, and can capture the strong dependency among individual nodes in a network. Standard local models such as RNNs learn each time series independently, losing the interaction information with other nodes, but are fitted solely on each individual trajectory, resulting in good short-term prediction performance. Directly combining the predictions from global and local models would result in a large number of model parameters (introduced by the individual models). The authors instead propose to use a learnable embedding vector to represent the locality of each node and incorporate it in the GNN message-passing procedure. To guide the learning process for such node embeddings, the authors further propose two regularization terms to make the model more generalizable, under the assumption that the underlying dynamics of nodes within the same network topology do not differ too much. Experimental results on several benchmark datasets show the proposed method makes better predictions than the compared baselines. Strengths: 1. The writing of this paper is very easy to follow. 2. The idea to inject local information into existing global spatiotemporal models is interesting. 3. The experiments are comprehensive, though some baselines are missing. Weaknesses: 1. My major concern is the contribution/novelty of this paper. The authors propose to learn a node embedding to mimic the role of local models such as an RNN trained on each individual time series. First of all, the node embeddings are static, whereas the output of RNN models is dynamic. Those local features for each individual node can change over time, which can be well captured by any local model.
Secondly, the learnable node embeddings seem to me similar to exogenous factors specific to each node; how can one guarantee the learned embeddings do not serve the same role as those exogenous factors? Finally, learning these embeddings makes the whole model unable to perform inductive tasks. When a new node/time series comes in, one needs to retrain the model instead of directly using it for inference, as opposed to existing spatiotemporal GNNs. 2. Also, there are some missing baselines in terms of spatiotemporal GNNs, such as continuous graph-ODE approaches [1][2] and other discrete methods [3]. [1] Huang, Zijie, Yizhou Sun, and Wei Wang. "Learning continuous system dynamics from irregularly-sampled partial observations." Advances in Neural Information Processing Systems 33 (2020): 16177-16187. [2] Song Wen, Hao Wang, and Dimitris Metaxas. 2022. Social ODE: Multi-agent Trajectory Forecasting with Neural Ordinary Differential Equations. In Computer Vision–ECCV 2022: 17th European Conference. [3] Sanchez-Gonzalez, Alvaro, et al. "Learning to simulate complex physics with graph networks." International Conference on Machine Learning. PMLR, 2020. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can the authors provide a running time comparison during the testing stage, as the proposed method needs to retrain the model on unseen (new) nodes? 2. Can the authors visualize some of the learned local node embeddings and show some case studies to interpret their semantic meanings? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: The authors have not discussed the limitations of their model.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. Before providing point-by-point answers, we’d like to remark that the main contribution of our paper is not in introducing an architecture but rather in studying a crucial aspect of graph-based forecasting, i.e., the interplay of local and global aspects of time series forecasting in such architectures. We would appreciate it if the reviewer could reconsider their evaluation of the novelty/contributions of our paper in light of this. > The authors propose to learn a node embedding to mimic the role of local models such as RNN trained on each individual timeseries. First of all, the node embeddings are static, whereas the output of RNN models are dynamic. There might be some misunderstanding here. Node embeddings are learnable parameters and, as such, static once trained; the same holds true for the learnable parameters of an RNN, which are static as well; what is dynamic, instead, is the output of the models. Having node embeddings, rather than a fully local model, only implies that most of the parameters involved in the dynamic processing of the data are shared among time series. In other words, assuming the encoder of a global-local model is an RNN, the embeddings are passed as an additional input to provide localization, while the RNN parameters remain shared for all time series. > The learnable node embeddings seem to me are similar to those exogenous factors specific to each node, how to guarantee the learned embeddings would not serve as the same role as those exogenous factors? Embeddings are indeed used similarly to exogenous features, and exogenous features can indeed be used to localize predictions. However, as specified in the paper, such features are often not available in practice, and node embeddings are far more flexible as the encoding is learned end-to-end, thus becoming part of the model’s parameters.
Furthermore, structuring the embedding space allows for regularizing the local components of the model (as shown in Section 5.1 and the transfer learning experiments). > Learning these embeddings would make the whole model not able to perform inductive tasks. When a new node/timeseries comes in, one needs to retrain the model instead of directly use the model to do the inference, opposed to existing spatiotemporal GNNs. Yes, that is correct: adding learnable node embeddings makes the model not inductive. In this respect, a significant contribution of the paper is showing that – with the proper regularizations – hybrid global-local models based on node embeddings can be adapted to new nodes using only a few observations, without training the full model from scratch. Finally, note that most state-of-the-art STGNN architectures are actually not inductive, as they rely on some form of node identification (see, e.g., Graph WaveNet, AGCRN, etc.). We show that with our simple approach, we can get similar performance and that the resulting model can be easily transferred by fine-tuning only a very small number of parameters, drastically reducing sample complexity. > Also there are some missing baselines in terms of spatiotemporal GNNs, such as continuous graphODE approaches [1][2] and other discrete methods [3]. In the empirical section, we focused on SOTA architectures for the benchmarks and types of problems considered in the study. However, even if the baselines suggested by the reviewer have been developed in a different context, we think it is indeed worth discussing them in the related works section. We will do so in the revision of the paper, thanks for the suggestion. > Q1 Can the authors provide a running time comparison during the testing stage, as the proposed method would need to retrain the model on unseen (new) nodes?
In the transductive setting, the models have the same computational costs, as the embeddings result only in a small increase in the number of features and, as such, their impact w.r.t. time complexity is negligible. In the transfer learning setting, the computational cost is again exactly the same at inference time. The only overhead is the cost of fine-tuning the model (no full re-training is needed), yet such cost does not depend at all on the methodology we propose, but on the complexity of the model being fine-tuned and on the number of observations available. Furthermore, fine-tuning the entire model – rather than the embeddings alone – can be more computationally expensive. Finally, note that fine-tuning needs to be performed only once and that the performance improvement w.r.t. the zero-shot model is very large, even for inductive models. > Q2 Can the authors visualize some of the learned local node embeddings and show some case study to interpret their semantic meanings? Fig. 1 in the paper provides a visualization of the time series associated with different clusters of embeddings and, commenting on the figure, in Sec. 7 we discuss how the emerging clusters elucidate the role of embeddings as local components in the forecasting architectures. In addition to that, we include a t-SNE visualization of the learned embeddings with the different regularization mechanisms and different settings in the pdf attached to the rebuttal. The results confirm how regularization allows for structure to emerge in the embedding space. > The authors have not discussed the limitations of their model. Limitations are discussed throughout the paper. In particular, we highlight several times the limits of the hybrid global-local model in the inductive settings and provide an in-depth discussion on the issue in Section 5.1. We will improve the discussion on the limitations of our study and include a comment on possible future works in the conclusions of the paper.
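To make concrete what adapting a hybrid model to a new node by "fine-tuning only the embeddings" could look like, here is a minimal numpy sketch. Everything in it is a hypothetical toy (a frozen shared linear predictor, a single new node, plain gradient descent on its embedding); it is not the paper's actual architecture, only an illustration of the mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)
d_emb, window = 4, 20

# Frozen, pre-trained shared weights (random here, purely for illustration).
W_x = rng.normal()                # shared weight on the observation
W_e = rng.normal(size=d_emb)      # shared weights applied to the node embedding

x = rng.normal(size=window)       # a few observations from a NEW node
y = 0.8 * x + 1.5                 # toy local dynamics with a node-specific offset

# Adapt ONLY the new node's embedding; all shared weights stay frozen.
emb_new = np.zeros(d_emb)
lr = 0.1 / (W_e @ W_e)            # step size scaled to the quadratic curvature
for _ in range(200):
    err = W_x * x + W_e @ emb_new - y        # residuals on the new node
    emb_new -= lr * 2.0 * err.mean() * W_e   # gradient step on the embedding only

mse_before = np.mean((W_x * x - y) ** 2)
mse_after = np.mean((W_x * x + W_e @ emb_new - y) ** 2)
print(mse_after < mse_before)  # True: the embedding absorbs the local offset
```

Only `d_emb` numbers are updated per new node, which is the sense in which sample and compute costs of the transfer step stay small.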
--- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. Most of my concerns have been addressed, so I raised my score to 4. But I still have questions regarding learning node embeddings for each time series and injecting them into a shared global model for making predictions. The authors mentioned that the node embeddings are similar to exogenous factors, but the latter are usually latent. Can the proposed learnable node embeddings then be interpreted as latent exogenous factors? It would be great if the authors could further summarize the differences between the two concepts. --- Reply to Comment 1.1.1: Title: Additional clarification Comment: Sorry for the confusion; here’s a detailed discussion regarding the difference between the two. In our framework, node embeddings are in fact node-specific learnable parameters of the model (a different vector for each node) trained end-to-end together with the other (shared) model parameters. Using these node-specific trainable vectors allows us to tailor (localize, in the terminology of the paper) the model predictions w.r.t. each time series. A global model (i.e., a model with no parameter specific to any time series) would not explicitly account for possible node-specific characteristics (local effects). Implementation-wise, once trained, node embeddings are passed as further inputs to the model, similar to how exogenous variables are typically processed. Exogenous variables, however, are usually additional covariates alongside the target time series. As an example, an exogenous variable can encode the day of the week or the external temperature. Although exogenous variables can be processed as additional inputs to forecasting models for conditioning the predictions, similarly to node embeddings, these are external inputs provided to the predictor and not learnable parameters associated with a specific node.
The difference between the two is, then, quite large; what we meant in our previous answer is that node embeddings are used similarly to exogenous variables as they provide conditioning on the predictions. Indeed, as the reviewer suggests, the learned embeddings can be interpreted as latent factors conditioning the predictions. This interpretation motivates the regularizations proposed in Sec. 5.1. However, such latent vectors are learned directly, by parametrizing them with a separate set of learnable parameters for each time series. Also, once trained, these latent vectors are static: they are not conditioned on the current input window. We are available for providing further clarifications if needed, thanks again for the feedback and the careful review. We hope this addresses the issues currently preventing the reviewer from recommending acceptance.
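The distinction discussed in this thread (node embeddings as per-node learnable parameters fed to an otherwise fully shared predictor) can be illustrated with a minimal numpy sketch. All names, dimensions, and the toy tanh encoder below are hypothetical, chosen only to show the mechanism, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, window, d_in, d_emb, d_hid = 5, 12, 1, 4, 8

# Node embeddings: one learnable vector per node, normally trained end-to-end
# with the shared weights (here just randomly initialized for illustration).
node_emb = rng.normal(size=(n_nodes, d_emb))

# Shared ("global") encoder and readout weights, identical for every time series.
W_in = rng.normal(size=(d_in + d_emb, d_hid))
W_out = rng.normal(size=(d_hid, 1))

def forecast(x):
    """x: (n_nodes, window, d_in) input windows -> one prediction per node."""
    # Broadcast each node's static embedding across its input window, then
    # feed it as extra input channels, much like an exogenous covariate.
    emb = np.broadcast_to(node_emb[:, None, :], (n_nodes, window, d_emb))
    h = np.tanh(np.concatenate([x, emb], axis=-1) @ W_in)  # shared encoder
    return h.mean(axis=1) @ W_out                          # shared readout

# Feeding the SAME window to every node still yields different predictions:
# the embeddings localize an otherwise fully shared model.
x_same = np.broadcast_to(rng.normal(size=(1, window, d_in)),
                         (n_nodes, window, d_in))
y_hat = forecast(x_same)
print(y_hat.shape)  # (5, 1)
```

Unlike a true exogenous covariate, `node_emb` carries no external signal: it is a static parameter learned from the data, which is exactly the "latent factor" reading suggested in the comment above.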
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We provide point-by-point answers to each reviewer and attach as supplementary results a visualization of the embedding space for different regularization strategies for load and traffic forecasting datasets (see the attached pdf for more details). We hope that the rebuttal clarifies all the raised issues. Pdf: /pdf/52f0edc567abdbb1d1992af7635c3c3807301308.pdf
NeurIPS_2023_submissions_huggingface
2023
Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization
Reject
Summary: This work proposes Flag Aggregator (FA) for a more robust aggregation of gradient in data-parallel training. FA formulates gradient aggregation as a Maximum Likelihood Estimation procedure using Beta densities. Theoretically, FA is analyzed using techniques from convex optimization. Empirically, FA demonstrates decent performance against Byzantine failure for image classification tasks (esp. ResNet-18 on CIFAR10) on a 4-GPU cluster networked with 100GbE. Strengths: +. Proposed a simple Maximum Likelihood Based estimation procedure for aggregation purposes, with novel regularization functions +. Provided code for reproducibility +. Well-written: easy to follow Weaknesses: -. Marginal wall-clock time improvement, maybe due to heavy SVD overhead: e.g., Figure 10 -. Missing benchmark: 1. only two small models are evaluated (e.g., ResNet18 and 2-layer CNN), how about more models like RNNs and larger models like GPT2? 2. only image classification tasks are evaluated (e.g., CIFAR10 and MNIST), not even CIFAR100 nor full ImageNet, how about more tasks like language modeling? -. Missing modern cluster: 4-GPU cluster with one GPU per machine is not a modern setup for evaluating scalability of distributed training Technical Quality: 2 fair Clarity: 3 good Questions for Authors: *. What if the Byzantine workers send more than just "uniformly random gradients"; how will the FA perform? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q.** How about more models like RNNs and larger models like GPT-2? And how about more tasks like language modeling? **Ans.** For the Tiny ImageNet experiments in the supplement, we used ResNet-50 as a larger model. In order to store larger models at the current (or larger) scale of our distributed experimental setup, we would require more GPUs, which unfortunately we did not have access to. However, our contributions, such as presenting a simple Maximum Likelihood based estimation procedure for aggregation and significantly better experimental results compared to several baselines, still hold in various settings. The extent to which the benefits carry over to larger settings is still open. We hope that our contributions are well received in the research community so that it would open a door for larger (possibly industrial) scale evaluation. **Q.** What if the Byzantine workers send more than just "uniformly random gradients"; how will the FA perform? **Ans.** In the main paper, we do have experiments with synthetic data (nonlinear data augmentation routines) and tolerance to communication loss, where a percentage of gradients are dropped and zeroed out at the parameter server. In addition, we have included a figure in the attached PDF for when Byzantine workers send a gradient based on the fall of empires attack with epsilon=0.1 [Xie et al. 2020] and when they send 10x amplified sign-flipped gradients [Zue et al. 2021]. We are happy to include these in the supplement. [Xie et al. 2020] Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation, UAI 2020. [Zue et al. 2021] Byzantine-Resilient Non-Convex Stochastic Gradient Descent, ICLR 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal with a detailed explanation. The authors have addressed my concerns to some extent through the response, so I will raise the score by one level.
--- Reply to Comment 1.1.1: Comment: We appreciate you taking the time to review our response and raising your score. Please let us know if you have any further questions or need clarification. We are happy to address them.
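The amplified sign-flip attack mentioned in the rebuttal above is cheap to simulate. The numpy sketch below is not the Flag Aggregator itself: it only contrasts a naive mean with a coordinate-wise median (a simple robust aggregator used here as a stand-in) under assumed toy dimensions, to show why such an attack breaks plain averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_byz, dim = 15, 3, 10  # toy sizes, chosen only for illustration

true_grad = np.ones(dim)
# Honest workers: true gradient plus small noise.
grads = true_grad + 0.1 * rng.normal(size=(n_workers, dim))

# Byzantine workers send 10x amplified, sign-flipped gradients.
grads[:n_byz] = -10.0 * true_grad

mean_agg = grads.mean(axis=0)          # naive aggregation
median_agg = np.median(grads, axis=0)  # simple robust aggregation

err_mean = np.linalg.norm(mean_agg - true_grad)
err_median = np.linalg.norm(median_agg - true_grad)
print(err_mean > err_median)  # True: the mean is pulled far off target
```

With 3 of 15 workers sending `-10 * true_grad`, each coordinate of the mean lands near -1.2 instead of 1, while the median stays with the honest majority; this is the qualitative behavior the stronger attacks in the cited papers exploit against non-robust rules.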
Summary: This paper tackles the problem of Byzantine robustness in distributed learning by proposing a new robust aggregation rule called Flag Aggregator. The latter is based on maximum likelihood estimation with regularization. The authors empirically show that distributed gradient descent with Flag Aggregator performs well against simulated Byzantine attacks compared to other existing solutions. Strengths: The problem of Byzantine robustness is important in distributed learning. Moreover, the proposed Flag Aggregator seems to follow a creative approach. Weaknesses: My main concern is the great lack of clarity of the paper, especially in the theoretical part. I also think that the theoretical and experimental parts lack several elements. * Lack of clarity: the paper has several clarity-affecting issues which make it really hard to assess the technical contributions. * The paper starts (right away) with an unclear optimization problem (Equation 1): what are A, Y and C? * line 99: why is $Y Y^\top G$ a "reconstruction" of $G$? And what is meant by reconstruction exactly? * lines 100-103: I could not verify the stated claims/intuitions. * line 116: why does orthogonality imply efficiency? The authors seem to say that it is because we can derive a rank-one matrix factorization, but this does not require orthogonality of the matrix. In fact, $YY^\top G$ is just $G$ if $Y$ is orthogonal. * lines 123-135: this paragraph assumes that the reader knows what the Flag/Grassmannian manifold is, which was not the case for me. * Section 2.2: where does the vector $v$ come from? Is it directly sent by the workers? Also, why do you assume that it follows a Beta distribution? * Algorithm 1: I could not find IRLS explained in the text. Also, it is strange that workers locally perform the update step. It always happens at the server level in distributed SGD. * line 163: what is Flag Median? * line 188: what is a "second order optimal local solution"?
* Lack of convergence guarantees: After all, a Byzantine-robust learning solution should have convergence guarantees, since simulated attacks are not guaranteed to be optimal, i.e., to instantiate worst-case adversaries. Typically [Karimireddy et al. 2022, Allouah et al. 2023], convergence to a neighborhood of the original solution is ensured in the presence of Byzantine workers for smooth non-convex losses. * Experimental section: I suggest simulating more Byzantine attacks. The tested attacks (uniformly random vectors) are extremely weak compared to FoE [Xie et al. 2020], ALIE [Baruch et al. 2019] and others, which is unfortunate since the paper considers Byzantine adversaries. Also, some advanced defenses like NNM [Allouah et al. 2023] and Bucketing [Karimireddy et al. 2022] are missing; although they were intended for non-iid settings, it is important to check how they perform against your method to assess the significance of the contribution. [Allouah et al. 2023] Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity, AISTATS 2023. [Karimireddy et al. 2022] Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing, ICLR 2022. [Xie et al. 2020] Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation, UAI 2020. [Baruch et al. 2019] A Little Is Enough: Circumventing Defenses For Distributed Learning, NeurIPS 2019. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: I suggest that the authors address the weaknesses listed above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 2 fair Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q.** In Equation 1, what are $A$, $Y$ and $C$? **Ans.** $A$ denotes the aggregation function, $Y$ denotes the decision variable in the optimization problem, and $C$ denotes the desired constraints. The reviewer will note that later in the paper, we have explicitly defined what these are in equation (5) -- where $A$ is the sum of the log-likelihood and regularization terms and $C$ is the set of matrices with orthonormal columns. We are happy to clarify this near equation (5). **Q.** Regarding reconstruction at line 99 and orthogonality's efficiency implication at line 116. **Ans.** Given a gradient matrix $G$ and subspace $Y$, the projection of $G$ onto $Y$ is given by $\mathbf{P}=Y^TG$. The entry $\mathbf{P}_{ji}$ contains the amount (measured using the dot product) of $g_i$ along $y_j$. So $Y\mathbf{P}$ gives us the reconstruction of $G$ using each column of $Y$. These two steps correspond to the reconstruction of $G$ using $Y$. Formal proof: This is a folklore result that can be found in various places, but we provide a formal proof here for completeness' sake. By reconstruction, we mean that the matrix $YY^TG$ is the best (or optimal) rank-$m$ reconstruction of $G$ -- here optimality is with respect to the squared $\ell_2$ norm, also known as the Mean Reconstruction Error (MSE). In detail, we are given a gradient matrix $G$ and vectors $y_j$, $j=1,\dots,m$, such that the $y_j$'s are orthonormal, that is, $y_j^Ty_{j'}=1$ if $j=j'$, and $0$ otherwise. Since each column of $G$ is multiplied by the aggregation matrix $YY^T$ separately, we consider each $g_i$ individually. (i) Case 1: $m=1$, so we are given just one $y$ such that $\|y\|_2=1$.
Then projecting $g_i$ onto $y$ in MSE is the solution to a 1-d optimization problem: \begin{equation} \arg\min_{\mathbf{p}\in\mathbb{R}}\left[\mathrm{MSE}(\mathbf{p}):=\left\|g_i-\mathbf{p} y\right\|_2^2 =\left\|g_i\right\|_2^2-2 \mathbf{p}\, g_i^{T} y+\mathbf{p}^2\left\|y\right\|_2^2\right] =\frac{g_i^{T} y}{\left\|y\right\|_2^2}=g_i^{T} y, \end{equation} where we used the fact that $\|y\|_2=1$ in the last step. So the reconstruction is given by scaling $y$ by the optimal $\mathbf{p}=g_i^{T} y$. It turns out that this calculation can be performed with each basis vector, as we show in the next case. (ii) Case 2: $m>1$, so we are given $m$ pairwise orthonormal vectors and, similarly to the previous case, we have to determine the $m$ projection coefficients for each $g_i$. Given $g_i$, we determine $\mathbf{p}\in\mathbb{R}^m$ as follows: \begin{equation} \arg\min_{\mathbf{p}_1, \cdots, \mathbf{p}_m}\left[\mathrm{MSE}(\mathbf{p}_1,\cdots,\mathbf{p}_m):=\left\|g_i - \sum_{j=1}^m \mathbf{p}_j y_j\right\|_2^2=\left\|g_i\right\|_2^2-2 \sum_{j=1}^m \mathbf{p}_j g_i^{T} y_j+\sum_{j=1}^m \mathbf{p}_j^2\left\|y_j\right\|_2^2\right] \end{equation} where we used the orthogonality relationship in the last equality. By setting $\nabla_{\mathbf{p}_j}(\mathrm{MSE})=0$ we see that the reconstruction problem decomposes into $m$ 1-d optimization problems, each with the closed-form solution $\mathbf{p}_j=\frac{g_i^{T} y_j}{\|y_j\|_2^2}=g_i^{T} y_j$, $j=1,\dots,m$, as in the previous case. So in this case, the reconstruction is given by $\sum_j{\mathbf{p}_jy_j}=\sum_j y_jy_j^Tg_i =Y Y^{T} g_i$. We hope this illustrates why we require orthogonality constraints, since otherwise the reconstruction might be computationally expensive. Note that $Y^{T} Y=I$ does not imply $Y Y^{T}=I$ since $m<n$. In the literature, the matrix $YY^T$ is often called a projection matrix (not $Y^TG$, as we do here), since $(YY^T)^2=YY^TYY^T=YIY^T=YY^T$ for any orthonormal $Y$.
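The folklore result above is easy to check numerically. In the sketch below (toy sizes; $Y$ is taken as the top-$m$ left singular vectors of $G$, an assumption made purely for illustration), $YY^TG$ coincides with the optimal rank-$m$ approximation given by the truncated SVD, and $Y^TY=I$ indeed does not imply $YY^T=I$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 8, 6, 2  # gradient dimension, number of workers, subspace rank

G = rng.normal(size=(n, k))  # columns are the workers' gradients g_i

# Y: orthonormal basis of the top-m left singular subspace of G.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
Y = U[:, :m]

# Reconstruction via projection coefficients P = Y^T G, then Y P = Y Y^T G.
recon = Y @ (Y.T @ G)

# Eckart-Young: the best rank-m approximation in the MSE/Frobenius sense.
best_rank_m = U[:, :m] @ np.diag(s[:m]) @ Vt[:m]

print(np.allclose(recon, best_rank_m))  # True
print(np.allclose(Y.T @ Y, np.eye(m)))  # True: orthonormal columns
print(np.allclose(Y @ Y.T, np.eye(n)))  # False: m < n, so YY^T is not I
```

The orthonormality of the columns of $Y$ is what lets the reconstruction reduce to the cheap two-step product $Y(Y^TG)$, as argued in the proof.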
**Q.** Lines 123-135: Regarding the Flag/Grassmannian manifold. **Ans.** We will add the necessary background in the appendix. **Q.** Section 2.2: Regarding $v$ and whether we assume that it follows a Beta distribution. **Ans.** As indicated, $v\in[0,1]$ is a value between $0$ and $1$. The Beta distribution is usually used in economics to model robustness, which is one reason we chose it. The Beta distribution can model distributions with various types of skewness, which we believe is convenient since the priors are easy to set, and this can sometimes be crucial for aggregation purposes. **Q.** Algorithm 1: IRLS explanation and where the update step is performed in distributed SGD. **Ans.** Please see the general response for a description of IRLS and its connection to Flag Aggregator. The model updates are done locally by the workers after they receive the aggregated update from the server. **Q.** Line 163: what is Flag Median? **Ans.** It is defined as a specific type of median of subspaces, as proposed in [Mankovich et al. 2022]. **Q.** Line 188: what is a ``second order optimal local solution''? **Ans.** Feasible points such that the Hessian has nonnegative curvature -- all eigenvalues are nonnegative. **Q.** A Byzantine-robust learning solution should have convergence guarantees since simulated attacks are not guaranteed to be optimal; i.e. instantiate worst-case adversaries. **Ans.** Please note that the matrix $YY^T$ is a symmetric positive semidefinite matrix, so our method is guaranteed to converge whenever the original algorithm converges, since such matrices have an eigendecomposition with all nonnegative eigenvalues. Intuitively, applying $YY^T$ to $G$ simply corresponds to scaling different parameter gradients after rotation with the eigenvectors of $YY^T$, similar to a preconditioner. **Q.** Regarding the experimental section. **Ans.** These papers are creative and very interesting.
We will consider them in future work, but there is a crucial difference: they do **not** formulate their aggregation scheme as an optimization problem that can immediately be transformed into a computational problem, as we have done. Moreover, these methods are often analyzed under sophisticated assumptions, whereas the convergence of our method can be guaranteed under standard assumptions from the optimization literature. --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and other reviews, and I am maintaining my score. --- Reply to Comment 1.1.1: Comment: We are grateful for your time in reviewing our response and other feedback. If you have any further questions or require clarification, please do not hesitate to inform us. We are happy to provide the answers you need.
Summary: The authors propose a gradient aggregation method for distributed optimization that is robust to Byzantine device failures in large-scale distributed setups. In each round, given the set of gradients from the workers, the authors aim to find the optimal low-rank subspace that can explain the variance of a majority of the gradients. The authors formulate the problem as an MLE under a Beta distribution setup and solve an approximate version of the problem through SDP. Strengths: Byzantine device failures are an important concern for large-scale clusters. The presented method is well motivated theoretically and backed up with experiments comparing its robustness properties to those of other aggregation methods. Results demonstrate a significant advantage of this aggregation setup. Weaknesses: Although it is evident that Byzantine failures can have a significant impact on gradient computation when using simple aggregation rules, it is unclear how often such failures happen at the cluster sizes the authors have considered. Augmentation pipelines induce their own noise into gradient information, but it is unclear whether this will be adversarial in _each_ update step. The amount of noise induced and its effect on adversarial training setups is also not evident (see questions). This makes it unclear how the clear advantages of the method translate to real-world workloads, especially considering that the method adds a potentially expensive top-k SVD computation step. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Frequency of Byzantine failures: Could you provide some insight into how frequently failures due to hardware/software/augmentation-pipeline issues occur in training runs? Assuming there will be at least a single Byzantine worker at all times (i.e. $f\ge1$ in Fig 4) seems too strong and could be better contextualized with some supporting evidence.
- Scalability of the method to federated clusters: Byzantine failures will potentially be a larger concern when training over heterogeneous hardware and partially available clients, for example in federated clusters. Could the authors comment on the feasibility of running the method in such settings, considering that the majority of the computation is performed at the central server? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q.** Could you provide some insights into how frequently failures due to hardware/software/augmentation-pipeline issues occur in training runs? **Ans.** Training today’s large models is a very time-consuming task that can take days or even weeks. An important problem is the fact that failures inside the training environment, e.g. a datacenter, impede the progress of distributed training. An analysis from Microsoft spanning two months and covering around 100k jobs run by hundreds of users was presented in [Jeon et al. 2019]. As a high-level summary, jobs using more than 4 GPUs finish unsuccessfully at a higher rate due to various reasons (Section 4.2). When jobs fail, they waste a lot of computing time, which is also analyzed in another study from Facebook [Eisenman et al. 2022] on a system comprising 21 clusters over a period of one month. It is important to note that these training jobs interact with multiple systems during the training process, such as accessing training samples from a separate reader cluster. Consequently, any failure within these interconnected systems will impede the overall progress of the training. **Q.** Could the authors comment on the feasibility of running the method in the federated clusters setting, considering that the majority of the computation is performed at the central server? **Ans.** In Fig. 9 we tested the scalability of FA to a larger cluster within our hardware resources. For federated clusters, our one-cluster setup could be extended into a hierarchical architecture in which gradient-computing workers send their results to representative aggregating workers (which play the role of a PS for that cluster). Aggregating workers would further combine the partially aggregated results with other clusters' representatives at another level of the hierarchy. This allows scaling FA to federated clusters; however, the implementation of this approach is beyond the scope of our paper. [Jeon et al. 2019] Analysis of Large-Scale Multi-Tenant GPU Clusters for DNN Training Workloads, ATC 2019. [Eisenman et al. 2022] Check-N-Run: a Checkpointing System for Training Deep Learning Recommendation Models, NSDI 2022. --- Rebuttal Comment 1.1: Title: Re Comment: Thank you for your comment. I maintain my positive assessment of the work. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response. We appreciate your positive assessment of our work. We are more than happy to answer any further questions or provide any clarification you might need.
Summary: The paper proposes a new approach for aggregating gradients in distributed ML training under Byzantine failures, noise due to data augmentation, etc. The approach relies on constructing a low-dimensional subspace such that the proportion of the variance of the gradient vectors contained in the subspace is maximized. The authors derive the loss function for their setting and formulate the problem as a regularized convex optimization problem which can be solved with standard solvers to obtain the basis for the subspace. The update direction is then obtained by projecting the individual gradients onto the basis and then averaging the result. Experiments on different datasets and numbers of workers show improved prediction accuracy over baselines when distributed training is performed using the proposed method. Strengths: 1. The proposed approach is principled and easy to interpret, as it tries to identify the subspace which contains the maximum proportion of the variance of the gradients; it is also easy to implement due to its formulation as a regularized convex optimization problem which can be solved by off-the-shelf solvers. 2. The approach is extensively evaluated on a range of datasets (MNIST, CIFAR10, tiny-ImageNet) and for different noise models (random noise, adversarial data augmentation, etc.). I also appreciate the authors presenting results on wall-clock time to accuracy and per-iteration time, thereby acknowledging the extra time required per iteration in their approach to compute the aggregated gradients. This opens the door to future research on speeding up the proposed aggregation method while retaining the accuracy gains. Weaknesses: 1. My main concern with the approach is its novelty. Since the goal appears to be to estimate the subspace containing the maximum proportion of gradient variance, I am not sure why this cannot be done by retaining the top-k Principal Components of the gradients.
The authors even acknowledge in line 109 that the idea to use the ratio of variance of projected and true gradients has been explored in the Robust PCA literature. However, they do not explain why simply considering the principal components will not work, nor do they perform experiments with PCA/Robust PCA as baselines. I would like to see at least one of the two (explanation/experiments) to be convinced of the need for the proposed approach and its gains over PCA. 2. The extra computational cost and the added time per iteration as seen in Fig 10 (b) is also a weakness. While I do appreciate the authors measuring and presenting this time, it is not clear at this point if the accuracy gain justifies the extra time. One way to demonstrate this would be to allow the other approaches to run for the same amount of time in Fig 10 (c). If it could be shown that even after running for that long these approaches cannot match the accuracy of Flag Aggregation, then the extra time required could be justified. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Could you please explain more clearly (preferably with an example) why adversarial training could lead to noise in gradients? The current explanation in lines 59-61 is too vague and high-level. Clarifying this would be useful for readers not familiar with the adversarial training literature, and would strengthen the motivation of the approach. 2. Please introduce/explain the term Flag Optimization before using it in line 75, or add a citation, since I don't think readers outside the optimization community would be familiar with this term. 3. Fig. 5 seems to suggest that Flag Aggregation is only useful for bs >= 128? Is that indeed the case? What is the value of bs in the other experiments? 4. Likewise, Fig. 6 seems to suggest that it is useful only for p >= 11. Please clarify if that is indeed the case. Note that gains only in certain regimes of bs and p won't necessarily be a reason for rejection.
But it is important to acknowledge it in the paper so that the readers have all the information. 5. In line 119 you mention that gradient quality from workers may differ if workers use different batch sizes. I think this is a very interesting and practical scenario. Did you perform any experiments where workers used different batch sizes? Will it be possible to present the behaviour of Flag Aggregation and other baselines in this scenario? 6. Can methods from randomized linear algebra, or other approaches to speed up SVD, help in reducing the per-iteration time of your approach? If yes, it might be worth mentioning this in the paper as an option for readers looking to implement your approach. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I feel the main limitations of the work are the increased computation time per iteration and the lack of clarity on novelty w.r.t. PCA. I appreciate the authors' acknowledgement of the higher per-iteration time and look forward to their responses to the other limitations that I have identified under Weaknesses, above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q.** Why simply considering the principal components will not work? Did we perform experiments with PCA/Robust PCA as baselines? **Ans.** As explained in the general response, mathematically, one iteration of FA with uniform weights assigned across all workers is equivalent to PCA. The main novelty in our FA approach is the extension of PCA to an iteratively *reweighted* form that is guaranteed to converge. Specifically, we show that we obtain a convergent procedure in which we repeatedly solve weighted PCA problems. Moreover, the convergence guarantee immediately follows when the procedure is viewed as an IRLS procedure solving the MLE problem induced by the value of workers modeled with a beta distribution as in Sec 2.2. We added a baseline for top-m principal components of the gradient matrix in Fig 1(b) of the attached pdf to the global response. **Q.** It is not clear at this point if the accuracy gain justifies the extra time. One way to demonstrate this would be to allow the other approaches to run for the same amount of time in Fig 10 (c). If it could be shown that even after running for that long these approaches cannot match the accuracy of Flag Aggregation, then the extra time required could be justified. **Ans.** Thank you for your suggestion. Although FA gains are becoming visible towards the end of Fig 10(a) which is the zoomed-in version of Fig 10(c), we’re happy to let the other approaches run equally as long in terms of wall clock time, and we show the consistency of our results for this longer timespan in Fig 1(a) inside the attached PDF. We can also include this figure in the final version of the paper if needed. **Q.** Why adversarial training could lead to noise in gradients? **Ans.** Intuitively, the goal of adversarial training seems to be to make models predict all the nearby samples accurately, given the training set. 
So-called adversarial samples are typically constructed by introducing small imperceptible perturbations to the original data that lead the model to make incorrect predictions. These perturbations are calculated based on gradients obtained from the model itself with respect to training set samples. Due to the complexity of the models and the non-linear nature of deep network functions, and as training proceeds, it gets more challenging to find such adversarial samples. Recent technical results indicate that there are randomized algorithms that provide adversarial robustness guarantees in expectation *only*. Hence, these randomized algorithms, by design, have a failure probability. Our method could be used when how these adversarial sample-generating frameworks behave is not fully understood and the generated models are difficult to interpret. We are very keen on exploring this aspect in our future work! **Q.** Introduce/explain the term Flag Optimization before using it in line 75, or add a citation. **Ans.** Thank you for pointing this out. We will add a citation to clarify this before explaining more on line 123. **Q.** Fig. 5 seems to suggest that Flag Aggregation is only useful for $bs \geq 128$? Is that indeed the case? What is the value of bs in the other experiments? **Ans.** We have an experiment that discusses the utility of larger batch sizes at line 268. As mentioned on line 236, the batch size across experiments is fixed to $128$ unless otherwise stated. **Q.** Fig. 6 seems to suggest that it is useful only for $p \geq 11$. Please clarify if that is indeed the case. **Ans.** Our FA framework does not require $p \geq 11$; this is not a specific design choice of ours. As mentioned in Section 3.1 (Testbed), from a technical perspective related to our hardware resources, we instantiate $p=15$ workers unless otherwise stated. Our intention of having smaller $p$ values in the experiment related to Fig.
6 was to evaluate the marginal utility of having more workers under a fixed amount of noise $(f=2)$. We could not test for $p \leq 11$ using other baselines such as Bulyan (which requires $p \geq 4f+3$ as mentioned on line 193 for its best performance), so we decided to leave out those experiments. We will clarify this in the experiment. **Q.** Did you perform any experiments where workers used different batch sizes? Will it be possible to present the behavior of Flag Aggregation and other baselines in this scenario? **Ans.** Thank you for your suggestion. In our experiments, batch sizes are fixed across workers. However, our current framework allows using different batch sizes in workers at line 3 using: (i) the average of local gradients at worker $i$, and/or (ii) directly adding them in the SVD computation. Our experiments are with local averaging since we have limited GPU availability at our disposal. **Q.** Can methods from randomized linear algebra, or other approaches to speed up SVD help in reducing the per-iteration time of your approach? **Ans.** Yes, thank you for your suggestion. We will clarify this more in section 4 of the paper. Please refer to our general answer for more detail. --- Rebuttal Comment 1.1: Title: Re Comment: Thank you for the response. I am satisfied with the responses to all my questions, and I also appreciate the effort put into the additional experiments performed to substantiate your claims in response to the points I had identified under weaknesses. I would definitely recommend including these plots in the final version of the paper or appendix, if accepted. I am increasing my score to 6 (Weak Accept). I do not have any other questions or concerns. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response and for increasing your score. We are pleased that our responses met your expectations, and we will definitely incorporate the recommended plots in the camera-ready version.
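As a complement to the PCA discussion above, here is a minimal NumPy sketch of a plain top-m principal-subspace aggregation baseline: take the top-m left singular vectors of the gradient matrix, project each worker gradient onto that subspace, and average. All names and dimensions are illustrative, and this is the uniform-weights baseline, not the full reweighted FA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 200, 15, 3  # parameters, workers, subspace dimension
G = rng.standard_normal((n, p))  # gradient matrix, one column per worker

# Top-m left singular vectors of G span the m-dim subspace capturing the
# largest share of gradient variance (plain top-m PCA on the gradients).
Y = np.linalg.svd(G, full_matrices=False)[0][:, :m]

# Aggregate: project each worker gradient onto the subspace, then average.
agg = (Y @ (Y.T @ G)).mean(axis=1)
assert agg.shape == (n,)
```

Down-weighting suspicious workers before this SVD step is what distinguishes the reweighted scheme from this one-shot baseline.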
Rebuttal 1: Rebuttal: We thank the reviewers for spending time going through our submission in great detail, for very insightful comments, and also for pointing to aspects of the presentation style that can be improved. We are glad that the reviewers find our subspace-based aggregation algorithm to be novel, derivable using standard maximum likelihood estimation principles, and currently unavailable in the context of training deep learning models. Below we answer two questions that appear in various flavors in some reviews, and then answer individual questions below the respective reviews. We also provide additional empirical results requested by reviewers on some more attacks and baselines in the pdf. **Q.** Can you provide a brief summary of the IRLS procedure for Flag Aggregation in the main paper? **Ans.** Yes! IRLS is a standard optimization technique in which we substitute general norm functions with weighted Euclidean norm functions. The key advantage of this substitution is that we may obtain a closed-form solution to the substituted Euclidean-norm version. Starting from a (random) feasible point $Y_{\text{old}}$, the weights are calculated with the general norm functions. Then, the solution $Y_{\text{new}}$ to the weighted Euclidean-norm optimization is obtained. This corresponds to one iteration of IRLS, and repeating the above step with the new $Y_{\text{new}}$ constitutes the IRLS algorithm. For aggregation purposes in FA, in each iteration the square-root function, or more generally the $a$-th root function in equation (4), is replaced by a reweighted quadratic function which has a closed-form solution given by the SVD. Specifically, in FA, $Y_{\text{new}}$ is calculated by solving the Lagrangian equation (14) **in the supplement**, which is equivalent to computing the Singular Value Decomposition of a matrix defined using $GDG^T$ and $\lambda \nabla\mathcal{R}(Y_{\text{old}})$.
For our data-dependent regularization $\mathcal{R}$, this eigenvalue computation is equivalent to the SVD of $G$ concatenated with weighted columns of $g_i-g_j$ (as was done for individual $g_i$'s). The proof of this equivalence can be seen in [Mankovich et al. 2022], for example, where the left singular vectors of $GD^{1/2}$ are used in the solution procedure. For the norms mentioned in the main paper, including elementwise norms involving $\ell_1$, we proceed columnwise with respect to $Y\in\mathbb{R}^{n\times m}$, since each column of $Y$ corresponds to a basis vector in $\mathbb{R}^n$ of the $m$-dimensional subspace. By using a smooth approximation to $\ell_1$, for example as in Sec 1.2 of [Ene et al. 2019], we can see that a quadratic approximation is available, $\mathcal{R}(Y) = \sum_{j=1}^m y_j^T D_j y_j$, where $y_j\in\mathbb{R}^n$ is the $j$-th column of $Y$ and $D_j \in \mathbb{R}^{n\times n}$ is a diagonal matrix with positive entries along the diagonal calculated using $Y_{\text{old}}$; this can therefore be handled similarly to the data-dependent regularization. We will make space and include a description in the main paper. **Q.** To implement FA, is it possible to take advantage of fast, randomized SVD solvers? If so, how? **Ans.** Yes, it is indeed possible to use existing solvers for FA aggregation purposes; this is a main advantage of our FA algorithm. In detail, to calculate the left singular vectors of $GD^{1/2}\in\mathbb{R}^{n\times p}$, we use the fact that the number of workers $p\ll n$ and solve the $p\times p$ eigenvalue problem, which is fast in practice. Upon obtaining the right singular vectors, first-order methods can be used to obtain the left singular vectors. In this sense, we can use any fast, randomized SVD algorithm to solve for the right and/or left singular vectors. [Mankovich et al. 2022] The Flag Median and FlagIRLS, CVPR 2022. [Ene et al.
2019] Improved Convergence for $\ell_1$ and $\ell_\infty$ Regression via Iteratively Reweighted Least Squares. Pdf: /pdf/b78c76e72b98369756d5d2e8c654d500a1faa752.pdf
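The generic IRLS pattern described in the general response, alternating residual-based reweighting with a weighted PCA step via the SVD of $GD^{1/2}$, could be sketched as follows. The inverse-residual-norm weight formula below is an assumption chosen for illustration; the paper's exact objective and weights may differ:

```python
import numpy as np

def irls_subspace(G, m, iters=50, eps=1e-8, seed=0):
    """IRLS sketch: alternate residual-based reweighting with a weighted PCA
    step (top-m left singular vectors of G D^{1/2}). The weight formula here
    (inverse residual norm) is illustrative, not the paper's exact choice."""
    n, _ = G.shape
    Y, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, m)))
    for _ in range(iters):
        R = G - Y @ (Y.T @ G)                          # per-worker residuals
        d = 1.0 / np.maximum(np.linalg.norm(R, axis=0), eps)
        # Weighted PCA step: left singular vectors of G D^{1/2}
        # (each worker column scaled by sqrt of its weight).
        U = np.linalg.svd(G * np.sqrt(d), full_matrices=False)[0]
        Y = U[:, :m]
    return Y
```

For example, `irls_subspace(rng.standard_normal((100, 15)), 3)` returns an orthonormal $100\times 3$ basis; workers with large residuals get small weights at the next iteration.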
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a new method to aggregate gradients in a distributed training setting. Effectively, the proposed algorithm projects gradients onto a learned lower dimensional subspace and then aggregates the projections using standard techniques like averaging. This leads to a more robust aggregation against Byzantine failures. The projection is similar to a robust PCA, and is learnt using an approximate MLE via a Taylor expansion, leading to a computationally more feasible algorithm. Thorough experiments are conducted that demonstrate the efficacy of the proposed algorithm. Strengths: 1. The proposed novel algorithm empirically performs better than existing methods when measured by iteration complexity. 2. The authors provide a thorough comparison to existing methods and place their work in context. Weaknesses: 1. The exposition in the paper lacks clarity in some places -- for example, the IRLS subroutine in Algorithm 1 is not described or even briefly summarized in the main paper. 2. The authors do not present their theoretical convergence results in the main body of the paper. 3. As pointed out by the authors, the main proposed algorithm does not seem to perform significantly better than other existing algorithms when comparing wall clock runtimes. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What exactly is the IRLS procedure in Algorithm 1? There is no description in the main body of this subroutine or procedure. 2. In algorithm 1, line 5, the procedure to find the approximate subspace $\hat Y$ is done only on the server. Presumably, this procedure would be the bottleneck in the whole algorithm, since it involves multiple SVD computations. Can the authors comment on the breakdown of which parts of algorithm 1 take significantly more time, and explain any optimizations they have implemented in this context? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors addressed the limitations of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q.** What exactly is the IRLS procedure in Algorithm 1? **Ans.** Answered in the general response. **Q.** Can the authors comment on the breakdown of which parts of Algorithm 1 take significantly more time, and explain any optimizations they have implemented in this context? **Ans.** This is a great question. When using FA for the aggregation phase of distributed SGD, the computation cycles are mostly spent on the IRLS procedure at line 5 of Algorithm 1. Specifically, calculating the SVD of $GD^{1/2}$ (or the eigenvalues of the matrix $D^{1/2}G^TGD^{1/2}$) contributes most of these cycles. For more detail, please refer to our general response. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have gone through the rebuttals and the other reviews, and will keep my score unchanged. --- Reply to Comment 1.1.1: Comment: We appreciate your time in reviewing our response and other feedback. Please do not hesitate to let us know if you have any further questions or need clarification. We are happy to answer them.
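The small-eigenproblem trick referenced above (exploiting $p \ll n$) can be sketched as follows; `A` stands in for $GD^{1/2}$ and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, m = 10_000, 15, 3
A = rng.standard_normal((n, p))  # stands in for G D^{1/2}, with p << n

# Eigendecompose the small p x p Gram matrix instead of factoring n x n.
evals, V = np.linalg.eigh(A.T @ A)        # eigenvalues in ascending order
idx = np.argsort(evals)[::-1][:m]         # pick the m largest
sigma = np.sqrt(evals[idx])               # corresponding singular values
U = (A @ V[:, idx]) / sigma               # recover left singular vectors

# Agrees with the direct (much larger) SVD of A, up to column signs.
U_ref = np.linalg.svd(A, full_matrices=False)[0][:, :m]
assert np.allclose(np.abs(U.T @ U_ref), np.eye(m), atol=1e-5)
```

Because the eigenproblem is only $p \times p$, its cost is negligible next to the $O(np)$ matrix-vector products used to recover the left singular vectors.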
FAST: a Fused and Accurate Shrinkage Tree for Heterogeneous Treatment Effects Estimation
Accept (poster)
Summary: This paper proposes a novel strategy for estimating heterogeneous treatment effects called the Fused and Accurate Shrinkage Tree (FAST). The authors confirm the consistency of the proposed tree-based estimator and demonstrate the effectiveness of their criterion in reducing prediction error through theoretical analysis. The advantages of the proposed method over existing methods are demonstrated via simulations and real data analysis. As I am not very familiar with this field, it might be better to weigh my opinion less. Strengths: 1. This paper is technically sound. 2. The proposed method has better performance than the existing methods. Weaknesses: Incomplete references: Tree-based methods do not seem to be new in this field. There might be some other relevant references, such as the following. Agarwal, Abhineet, et al. "Hierarchical Shrinkage: Improving the accuracy and interpretability of tree-based models." International Conference on Machine Learning. PMLR, 2022. Nasseri, Keyan, et al. "Group Probability-Weighted Tree Sums for Interpretable Modeling of Heterogeneous Data." arXiv preprint arXiv:2205.15135 (2022). Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your instructive and detailed review comments. We are encouraged by the overall positive responses in the comments and suggestions. Our point-to-point responses to your comments are itemized below. **Q1:** Thank you for highlighting the potential absence of references. This suggestion greatly contributes to our effort to conduct a more comprehensive literature review. After a careful reading of the two papers, we find them indeed closely related to our work. Specifically, Agarwal et al. (2022) also developed a shrinkage method to improve the performance of the tree estimator. One of the main differences is that our procedure shrinks the original estimator toward a (potentially) biased estimator with small variability obtained from extra data sources, namely the observational data, while they propose to shrink the estimates at each leaf node toward the sample means of its ancestors. Nasseri et al. (2022), on the other hand, generalized tree-based methods to address the challenge of heterogeneous data from diverse data sources. It is worthwhile to note that both methods can readily be applied together with our shrinkage strategy, as long as a sufficient amount of (potentially biased) observational data is available. We would discuss these aspects in the revised version of the paper if given a chance. --- Rebuttal Comment 1.1: Comment: Thank you for your response. This addresses my concern. Since I am not an expert in this field, I will keep my score.
Summary: The paper deals with the problem of estimating heterogeneous treatment effects with multiple data sources. In particular, the paper aims to utilize the information from observational data to better estimate the causal effects in the trial data. Inspired by shrinkage estimation, a weighting scheme is developed to balance the unbiased estimator based on trial data and the potentially biased estimator based on observational data. Specifically, a tree-based algorithm with a new split criterion is proposed based on the above motivations. Some theoretical results about the causal effect estimation are derived. Finally, the authors provide simulations and real data analysis to demonstrate the performance of the proposed method. Strengths: The paper deals with an interesting problem in practice, i.e., data fusion. In particular, we may have multiple data sources; some sources have limited observations with unbiased causal effect estimates, while other sources have sufficient observations with biased causal effect estimates. The paper utilizes a tree-based algorithm with a new splitting criterion to tackle this issue. In addition, some theoretical analyses are provided to prove the advantages of the method. Weaknesses: 1. Although the paper considers an important problem in practice, the reason why the method chooses a tree-based algorithm is not convincing. In particular, other ML methods can also achieve data fusion. The advantages of using a tree-based algorithm over other methods are not clearly discussed. 2. In the introduction and simulation studies, the authors also mention many other methods that deal with the data fusion problem. However, the reason why the proposed tree-based method can perform better than other methods is not explained. 3. Although the paper provides theoretical analysis for the causal effect estimation, the interpretation of the theorems is not convincing.
For example, many other methods also have theoretical guarantees for the causal effect estimation. Which part of the theoretical analysis illustrates the advantages of data fusion and the tree-based algorithm? 4. In the general picture, the idea of data fusion is very similar to that of transfer learning, i.e., we want to transfer the information from the observational studies to help the estimation of causal effects in the trial data. However, the paper does not mention any related literature on transfer learning. In particular, what are the advantages and differences of the method compared with transfer learning? A more comprehensive literature review is encouraged. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. When the bias of the causal effect estimation in observational studies is large, will the data fusion cause any harm to the estimation of the causal effects? Can it be interpreted as a trade-off between bias and variance? 2. In section 2.1, the authors mention that $\hat{e}(X,1)$ is unbiased while $\hat{e}(X,0)$ is biased. Why is that? Didn't they come from the same data source with $\hat{e}(X,1) + \hat{e}(X,0) = 1$? 3. One of the main questions is: why use tree-based methods for data fusion? 4. In the real data analysis, the authors mention that the true causal effect is unknown and hence use the estimate from the generalized random forest as the ground truth. This is confusing. If the estimate using the random forest is not accurate, then the results of the real data analysis are not reliable or convincing. How do the authors correct this potential bias? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see in Weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful review. Our point-to-point responses are as follows, and we would add the additional discussions to the revised manuscript if given a chance. **W1&W2&W3&Q3**: Thanks for raising the issues concerning tree-based methods. As mentioned in the abstract of the paper, one of the key contributions of the paper is the establishment of a new data fusion framework called shrinkage estimation. The second is the implementation of a tree-based method by developing an adaptive fusion criterion in Section 3.2. We chose tree-based methods to show the core idea of the shrinkage estimation framework for two main reasons: - the advantages of tree models on tabular data, e.g., robustness to features and model interpretability; - the core of this fusion framework is to estimate the variance and bias, which further facilitates the estimation of $w^*$. Although the estimator mentioned here can be any ML model, e.g., a neural network (NN), many of them cannot conveniently calculate the estimator's variance. For the tree model, the variance $\sigma_u^2$ of the trial estimator can be easily estimated based on the observations that fall into the corresponding tree leaf. Besides, rfFAST does not require any parametric model of the data generation mechanism, which ensures its robustness. In comparison, existing methods in the literature usually rely on parametric assumptions. This may explain the better performance of rfFAST over other existing methods. Finally, for the theoretical analysis, as stated in Line 210 of the manuscript, "we formally establish the benefits of the proposed split criterion (9) compared with the conventional criterion (7)", which can be specifically listed as follows: - Theorem 1 shows that our fused estimator with the novel proposed criterion enjoys a uniform MSE reduction property compared to the conventional method using only the trial data.
- It further takes finite-sample variations into account and establishes non-asymptotic bounds for the excess risks of the empirical solutions of the conventional and proposed tree split criteria, respectively, ensuring that the MSEs of the estimated tree are close to those of the optimal solutions at the population level. **W4** Thanks for your suggestion regarding a comparison between transfer learning (TL) and causal data fusion. Intuitively, if we simply consider observational data as the source domain and trial data as the target domain, the trial and observational data fusion scenario can be regarded as a domain adaptation problem. However, there are a few significant differences between these two concepts. - Our purpose is to identify cause-and-effect relationships between different variables, while traditional TL focuses on the predictive task: the main motivation of TL is to leverage the knowledge of a source domain to improve the performance of the prediction task in the target domain (Pan and Yang, 2010; Weiss et al., 2016). - TL aims to learn the shared knowledge of domains while distinguishing the specific knowledge. In comparison, the prerequisite for causal data fusion is that both the trial data and the observational data share the same causal effect function, as stated in Assumption 1. Besides, in general, the ground truth is inaccessible for causal data fusion. **Q1** Indeed, the core idea underlying the shrinkage method is closely related to the classical "bias-variance" trade-off in ML. Recalling some notation: $$\sigma_u^2=\mathrm{Var}(\hat{\tau}_u),\quad b=\mathbb{E}(\hat{\tau}_b-\hat{\tau}_u),\quad w^*=\frac{\sigma_u^2}{\sigma_u^2+b^2},$$ the MSE of the optimal fused estimator $\hat{\tau}\_{w^*}=w^*\hat{\tau}_b+(1-w^*)\hat{\tau}_u$ can be expressed as $MSE(\hat{\tau}\_{w^*})=(1-w^*)\sigma_u^2$. From the expression of $w^*$, it becomes apparent that the optimal weight seeks an equilibrium between the squared bias $b^2$ of the observational estimator and the variance $\sigma_u^2$ of the trial estimator.
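As a concrete numerical illustration of this trade-off, here is a minimal Monte Carlo sketch of our own (not the authors' implementation; the effect, bias, and sample sizes are arbitrary) that forms the fused estimator with the oracle weight $w^*$ above and checks the predicted MSE $(1-w^*)\sigma_u^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0          # true treatment effect
n, m = 100, 5000   # trial / observational sample sizes
b = 0.3            # confounding bias of the observational estimator
reps = 20000       # Monte Carlo replications

# Unbiased trial estimator: sample mean of n noisy observations of tau.
tau_u = tau + rng.normal(0, 1, (reps, n)).mean(axis=1)
# Biased observational estimator: sample mean shifted by the bias b.
tau_b = tau + b + rng.normal(0, 1, (reps, m)).mean(axis=1)

sigma_u2 = 1.0 / n                      # Var(tau_u)
w_star = sigma_u2 / (sigma_u2 + b**2)   # oracle weight (sigma_b^2 dropped)
tau_fused = w_star * tau_b + (1 - w_star) * tau_u

mse_u = np.mean((tau_u - tau) ** 2)
mse_fused = np.mean((tau_fused - tau) ** 2)
# The fused MSE should track (1 - w*) * sigma_u^2, a w* relative reduction.
print(mse_u, mse_fused, (1 - w_star) * sigma_u2)
```

With these values $w^* = 0.1$, so the fused estimator trades a small bias for a roughly 10% reduction in MSE relative to the trial-only estimator.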
Also, it is worth mentioning that this interplay between bias and variance is easily achieved through our shrinkage method in real applications, as outlined in the paper. **Q2** The symbol $S$ (the second parameter of the function $\hat{e}$) serves as an indicator: $S=1$ means that the individual is sampled from the trial population, and otherwise from the observational population. And $e(X,U,S)=P(D=1|X,U,S)$ denotes the conditional probability of an individual being selected into the treatment group. First, as outlined in Lines 72 to 74, "In practice, due to $U$ being unknown, we usually use $\hat{e}(X,S)$ to estimate $e(X,U,S)$. In addition, $\hat{e}(X,1)$ is unbiased due to the randomization of the trial data, but $\hat{e}(X,0)$ is biased because the unmeasured confounder $U$ is related to the assignment of treatment $D$". In general, $\hat{e}(X,1)+\hat{e}(X,0)$ means $P(D=1|X,S=1)+P(D=1|X,S=0)$, which does not equal $1$. **Q4** We first trained a generalized random forest using the full STAR dataset. We then regarded the resulting estimator as a surrogate for the underlying ground truth. The reasons are listed below. - It is worth mentioning that a generalized random forest (GRF) estimator is consistent under mild conditions (Athey et al., 2019), meaning that as the sample size increases, it converges to the true causal effect. In our case, since the full dataset is collected from a randomized controlled experiment, the estimation error of the GRF estimator is expected to be quite small given its sample size. - Unlike other fields of machine learning, in causal inference the ground-truth causal effect is typically inaccessible. From this perspective, a certain degree of approximation is required, and similar approaches have been used in the literature (Kallus et al., 2018). **Some Refs:** - Pan, S. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359. - Weiss, K., Khoshgoftaar, T. M. and Wang, D.
(2016). A survey of transfer learning. Journal of Big Data, 3(1), 1–40. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing the reply to my questions. Regarding our concerns and questions about why tree-based models are used in data fusion, although the authors list some advantages of tree-based methods, most of them are from a general perspective. There is still a lack of sufficient support for why these specific tree methods should be used in data fusion. In addition, for the concern about the true value of the causal effects, it is still questionable whether the estimate from GRF is correct. Although GRF has a theoretical guarantee, we cannot be sure it holds in the real data analysis. So the comparison results may not support strong conclusions. --- Reply to Comment 1.1.1: Comment: Thanks for your comments. **1. For your concern regarding the use of tree-based methods:** Besides the general benefits of tree-based methods outlined in the previous reply, we pointed out in the second point of the first answer that the greedy and local averaging nature of tree-based algorithms makes it extremely simple to implement the shrinkage estimation framework, as the estimator of the key quantity $w^*$ can now be easily obtained. This computational and notational simplicity helps readers better understand the core idea of the shrinkage concept. **2. For your concern regarding the validity of the full-sample GRF estimator in the real-data analysis:** - Firstly, its correctness can be empirically verified via the numerical results of the real-data analysis presented in Figure 3 of the manuscript: except for the SF (simple fusion) estimator, which is not consistent and performed badly in the simulations, the mean square differences between the remaining estimators and the full-sample GRF estimator did exhibit a downward trend as the sample size of the unbiased data increased. These trends could not exist if the full-sample GRF estimator were wrong.
- Besides, as mentioned in the comments and reply, the GRF estimator is consistent for unconfounded data. That is to say, its estimation error vanishes as the training sample size increases. Thus, given the large training sample size of the full data, the estimator should be accurate from a theoretical perspective. - Lastly, since the ground-truth causal effect is inaccessible in real applications, a certain degree of approximation is necessary. Some recent works have applied similar approaches to construct surrogates for the inaccessible ground truth (Kallus et al., 2018; Wu et al., 2022). We would add these discussions to the revised version if given a chance. We would also kindly ask you for more specific questions concerning the proposed method itself, so that we could continue our revision; we are willing to provide further explanations. Your continued guidance is greatly appreciated.
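As an aside, the leaf-level variance computation mentioned in the first reply ("the variance $\sigma_u^2$ of the trial estimator can be easily estimated based on the observations that fall into the corresponding tree leaf") can be sketched as follows. This is our own illustrative stand-in using the standard difference-in-means estimator for a randomized trial, not the authors' code; the leaf data are simulated.

```python
import numpy as np

def leaf_effect_and_variance(y, d):
    """Difference-in-means effect estimate and its variance within one leaf.

    y : outcomes of the trial observations falling into the leaf
    d : binary treatment indicators (randomized, so the estimate is unbiased)
    """
    y, d = np.asarray(y, float), np.asarray(d, int)
    y1, y0 = y[d == 1], y[d == 0]
    tau_hat = y1.mean() - y0.mean()
    # Var(mean(y1) - mean(y0)) estimated by the sum of within-arm variances.
    sigma_u2 = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return tau_hat, sigma_u2

rng = np.random.default_rng(1)
d = rng.integers(0, 2, 400)            # randomized treatment assignment
y = 2.0 * d + rng.normal(0, 1, 400)    # true leaf-level effect = 2
tau_hat, sigma_u2 = leaf_effect_and_variance(y, d)
print(tau_hat, sigma_u2)
```

The resulting $\hat{\sigma}_u^2$ is exactly the quantity that, together with an estimate of the bias $b$, feeds the weight $w^*$.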
Summary: The authors propose a novel shrinkage method that fuses an unbiased estimator with a biased estimator. This method effectively reduces the MSE of the unbiased estimator. The approach offers a practical and straightforward implementation specifically tailored for estimating heterogeneous treatment effects. The authors extend the conventional node split criterion to align with the fused estimator and penalize the use of observational data with substantial confounding bias. The authors also provide a theoretical analysis that explains the advantages of the modified splitting criterion. Strengths: - The application of the weighting strategy from shrinkage estimation to fusing unbiased and biased estimators in order to reduce the MSE of the unbiased estimator is a great idea. - The modification of the node-splitting criterion that aligns with the fused estimator is an excellent enhancement to the methodology. - The paper is well-organized and, thanks to the authors' thoughtful and consistent notation, the methodology is easy to follow. Weaknesses: I'm concerned about the omission of $\sigma^2_b$ in practice. If we only look at the weight $w$, it makes sense if $\sigma^2_b$ is small compared to $\sigma^2_u$. However, in the tree-building process, we need working estimates of the MSE of the fused estimator, $\frac{(\sigma^2_b+b^2)\sigma^2_u}{\sigma^2_b+b^2+\sigma^2_u}$, and I don't think omitting $\sigma^2_b$ is justified by the same reason anymore. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The sample sizes considered in the experiments are always large. If an RCT sample of size 100 or 200 is available, the baseline models would do the job. I'm more curious about the scenario where we have a relatively small RCT sample and do need more data from observational studies to enhance the estimation. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No other limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your instructive review comments. We are greatly encouraged by the overall positive responses from the comments and suggestions. Our point-to-point responses to your comments are itemized below, and we would add those discussions to a revised manuscript. **W1:** Thanks for pointing out the issue. The MSE of the fused estimator can be equivalently expressed as \begin{equation} \frac{(\sigma_b^2 + b^2)\sigma_u^2}{\sigma_b^2 + b^2 + \sigma_u^2} = (1 -\frac{\sigma_u^2}{\sigma_b^2 + b^2 + \sigma_u^2})\sigma_u^2 = (1-w^*)\sigma_u^2,\nonumber \end{equation} so working estimates of the MSE of the fused estimator amount to estimating the optimal weight $w^*$ and the variance of the trial estimator $\sigma_u^2$. Thus, we would argue that the same reasoning can be applied to both the estimation of $w^*$ alone and the MSE of the fused estimator. This can also be considered a significant benefit of applying the proposed shrinkage method: in real applications, once the optimal $w^*$ is estimated, one can immediately obtain an estimate of the MSE of the corresponding fused estimator. Furthermore, the relative performance improvement of the fused estimator over the original trial estimator is readily at hand -- it is expected to be exactly $w^*$. **Q1:** Thanks for raising this issue. Indeed, as mentioned in your comment, one motivation for developing data fusion methods is to tackle the inadequacy of the RCT sample in real applications. Following your suggestions, we have enriched the original Table 1 (see Table 1 in the global response) by adding a scenario in which the trial sample size $n$ is reduced to $50$. We added the subscripts ''NF'' and ''SF'' to represent the trial-data-only and the simple fusion methods (the two simple fusion methods were also included for completeness), which makes it easier to compare the performances of all the methods presented in the table.
When $n=50$, the $\mathrm{rfFAST}$ still clearly outperformed the other data fusion methods and led to at least a 20 percent improvement in performance compared with its no-fusion counterpart $\mathrm{GRF}_{NF}$.
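For readers who want to verify the identity in the W1 reply above, a short numerical sketch (arbitrary illustrative values of our own choosing) checks both the closed form $(1-w^*)\sigma_u^2$ and that $w^*$ is indeed the minimizer of the fused MSE over the weight:

```python
import numpy as np

sigma_u2, sigma_b2, b = 0.04, 0.002, 0.15  # arbitrary illustrative values

def fused_mse(w):
    # MSE of w*tau_b + (1-w)*tau_u for independent estimators:
    # squared bias w^2 b^2 plus variances w^2 sigma_b^2 + (1-w)^2 sigma_u^2.
    return w**2 * (b**2 + sigma_b2) + (1 - w)**2 * sigma_u2

w_star = sigma_u2 / (sigma_u2 + sigma_b2 + b**2)
lhs = (sigma_b2 + b**2) * sigma_u2 / (sigma_b2 + b**2 + sigma_u2)

# Closed-form identity: fused MSE at w* equals (1 - w*) * sigma_u^2.
print(fused_mse(w_star), lhs, (1 - w_star) * sigma_u2)

# w* is also the minimizer of the fused MSE over a fine grid of weights.
grid = np.linspace(0, 1, 10001)
w_grid = grid[np.argmin(fused_mse(grid))]
print(w_grid, w_star)
```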
Summary: The paper introduces a Fused and Accurate Shrinkage Tree (FAST) algorithm for heterogeneous treatment effect estimation given trial and observational data. The FAST algorithm introduces (i) a shrinkage-based approach that combines trial and observational data for MSE reduction in treatment effect estimation, and (ii) a split criterion which down-weights observational data with high confounding bias. Further, the paper provides theoretical analysis demonstrating the benefits of the proposed split criterion. Experimental results on synthetic and real-world data demonstrate that the proposed approach outperforms baselines in terms of MSE. Strengths: - The paper is well written and easy to follow. The reviewer enjoyed reading this paper. - The proposed FAST algorithm is well-justified and the theoretical analysis might be of interest to some readers. - The paper tackles an important problem (fusing a small RCT with large, readily available observational data) with many applications. - The algorithm seems simple and easy to implement. Weaknesses: - Eqn. 5: The paper seems to have glossed over the rationale for dropping $\sigma_b$ in the shrinkage estimator. It's unclear in which scenarios, e.g., for how large an observational sample size, $\sigma_b$ becomes negligible. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Experiments** - Table 1: Could you include results for the NF estimator (using trial data only) and the SF estimator (both trial and observational data)? - Figure 3: Could you comment on why NF is not monotonically decreasing with sample size? Shouldn't we expect the rfFAST and NF estimators to cross when $n$ is large enough? **There are some limitations with tree-based methods that the paper does not adequately address:** - How sensitive is the FAST algorithm to the dimension $p$ and heterogeneity of the covariates? The experiments set $p=5$, which is unrealistic in real-world scenarios.
- What is the complexity, e.g., training time, of the rfFAST algorithm compared to baselines, and how easily does it scale given high-dimensional heterogeneous covariates? Minor: - Figure 1: Could you add more details summarising the plots in the caption? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The limitation discussion is inadequate. I encourage the authors to add a paragraph discussing the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your inspiring review comments. We are greatly encouraged by the overall positive responses from the comments. Our point-to-point responses are itemized below and the references are listed at the end. We would add the discussions and numerical experiments to the revised version if given a chance. **W1:** Thanks for pointing out the missing details in dropping the $\sigma_b$ term. In fact, as pointed out in Section 3.1, both $\hat{\tau}_u$ and $\hat{\tau}_b$ are approximately sample means, so the variances of both estimators should admit: $$ \sigma_u^2 = Var(\hat{\tau}_u) = \frac{C_u}{n} \ \hbox { and } \ \sigma_b^2 = Var(\hat{\tau}_b) = \frac{C_b}{m}, $$ where $C_u, C_b$ are positive constants, and $n$ and $m$ are the sample sizes. Thus, if we denote the dropping-$\sigma_b$ version of $w^*$ as $\tilde{w}^*$, then $$ \left|1 - \frac{\tilde{w}^*}{w^*}\right| = \frac{\sigma_b^2}{b^2 + \sigma_u^2} = \frac{C_b}{m(b^2 + \frac{C_u}{n})} = O\Big(\frac{1}{m(b^2 + n^{-1})}\Big). $$ Thus, for a given tolerance $\epsilon > 0$, a condition on the observational sample size of the form $m > \frac{C_b}{\epsilon (b^2 + \frac{C_u}{n})}$ suffices for $\sigma_b$ to become negligible. **Q1:** We have added the results of the SF estimators (the $HT\_{SF}$ and the $GRF\_{SF}$) in Table 1 of the additional PDF. Besides, both the $HT$ and $GRF$ estimators in the original Table 1 are in fact NF estimators. We added the subscript ''NF'' to clarify this aspect. From the new Table 1, the estimator $GRF\_{SF}$ performed worse than the $GRF\_{NF}$ in all scenarios. And the single-estimator method $HT_{SF}$ exhibited smaller MSEs than the $HT_{NF}$ only when $\beta$ was small. Both SF estimators performed worse than the $rfFAST$ estimator. Second, unlike other fields of machine learning, the actual causal effect is typically inaccessible in causal inference.
So we used the generalized random forest estimator trained on the full dataset as a surrogate for the underlying inaccessible ground truth; similar approaches have been used in the literature (Kallus et al., 2018). Thus, the nonmonotonicity is largely due to the intrinsic randomness of the data. Besides, the $\mathrm{NF}$ estimator in the Figure does show a downward trend, which is consistent with our simulation results if we check the column of the $\mathrm{GRF}_{NF}$ estimator in Table 1 of the additional PDF. Besides, as the theoretical MSE of the $\mathrm{rfFAST}$ estimator is $(1-w^*)\sigma_u^2$ and that of the $NF$ estimator is $\sigma_u^2$, the proposed fused estimator $\mathrm{rfFAST}$ should always perform better than its no-fusion counterpart $NF$. **Q2:** First, since FAST is essentially a tree-based method, its sensitivity to the dimension $p$ and heterogeneity of the covariates largely resembles that of the classical decision tree. And one of the appealing advantages of tree-based methods compared with other nonparametric methods is that they can flexibly adapt to high-dimensional and complex features, as revealed both empirically (Archer et al., 2008) and theoretically (Chi et al., 2022). To see this, we conducted an additional experiment presented in Table 3 of the additional PDF. We did not include the $SF$ methods to save space. The $rfFAST$ estimator was quite robust to increasing $p$. Second, we admit that our current code is only a prototype implementation of the method in Python, and we have not done much optimization. The actual running time of the algorithm depends on many factors, such as the implementation language (GRF is based on C++), histogram preprocessing acceleration like LightGBM, and parallelization. Therefore, here we analyze the theoretical time complexity.
For data with $n+m$ samples and $p$-dimensional features, the time complexity of building a tree by our method is $O(p\cdot (m+n)\cdot\log(m+n)+p\cdot n\cdot\log n)$, where the extra overhead compared to the decision tree, namely the second term, is mainly due to calculating $w^*$. But according to equation (2), $w^*$ has a closed-form solution, and the corresponding computational complexity is $O(pn\log n)$. Given that $n\ll m$, the overall computational complexity $O(p\cdot (m+n)\cdot\log(m+n))$ is of approximately the same order as that of the traditional decision tree. Therefore, there is no significant difference between the two methods. **Q3**. We re-wrote the caption of Figure 1: ''The probability density functions (pdfs) of the unbiased estimator (pink) and the biased estimator (blue) in the left panel, and the pdf of the shrinkage estimator under the optimal weight $w^*$ (green) in the right panel. The vertical dashed line represents the true parameter value $\theta^* =0$.'' **Q4**. We added a paragraph discussing the limitations: Our work also has several limitations. First, as mentioned above, our method currently cannot provide a confidence interval for the fused estimator. Second, we opt for the mean square error (MSE) criterion to attain an optimal balance between the variance of the trial estimator and the bias of the observational estimator. This choice is motivated by the existence of a readily obtainable closed-form expression for the optimal weight $w^*$ under this criterion, thereby enhancing its interpretability. The consideration of alternative optimization criteria remains unexplored. **Refs:** - Archer, K. J. and Kimes, R. V. (2008). Empirical characterization of random forest variable importance measures. Computational Statistics & Data Analysis, 52(4):2249–2260. - Chi, C.-M., Vossler, P., Fan, Y., and Lv, J. (2022). Asymptotic properties of high-dimensional random forests. The Annals of Statistics, 50(6):3415–3438. - Kallus, N., Puli, A.
M., and Shalit, U. (2018). Removing hidden confounding by experimental grounding. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer tnj1 Comment: Thanks for addressing all my comments.
Rebuttal 1: Rebuttal: We would like to express our sincere thanks to all the reviewers for your insightful and constructive review comments and we are greatly encouraged by the overall positive responses. Based on your valuable suggestions, we have carefully done a round of revision of the manuscript. While detailed responses to individual points can be found in the subsequent rebuttals, we also seize this moment to summarize the progress achieved in improving this paper over the past week. - We improved the illustration of the core idea of the shrinkage estimation framework. - A more detailed explanation of the expression of the optimal weight was given and missing details of the derivations due to page limit were presented. - The algorithm complexity of the FAST estimator was analyzed. - The advantages of the proposed method over existing methods were interpreted and limitations were discussed. - Three additional numerical experiments were conducted (shown in the additional PDF file) to better validate the effectiveness of the proposed method. We would add all the revisions to the manuscript if given a chance. Thanks again for all your great efforts and contributions in jointly improving this work. Pdf: /pdf/b18c8c3a1e331cd3900af856c58d81ff0e8ce9f6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers the problem of estimating (heterogeneous) treatment effects via both interventional and observational data. The authors propose a new estimator, namely the Fused and Accurate Shrinkage Tree (FAST), which optimally weights the interventional and observational estimators and combines this with a new split criterion for tree-based heterogeneous treatment effect estimation. The authors further conducted experiments to compare FAST against existing methods. Strengths: - Apart from a few typos, the paper is well written and ideas are presented in a rigorous yet clear way. - The idea of applying the shrinkage method to combine interventional and observational data for a better estimator is novel and could be a nice addition to the literature. - Note that I have not gone through all the proofs in the appendix; the mathematical correctness might need further input from other reviewers. Weaknesses: Generally I like this paper, but there are still a few weaknesses. - The main issue with the shrinkage method is interpretability: we need to understand better how the variance-bias trade-off behaves in different regimes, especially since the authors take a more analytic approach of first estimating the required quantities and then solving for the optimal weights. More specifically, for example, it would be beneficial to at least see how different trial mechanisms affect the estimation. For example, in a more realistic setting, one may consider non-randomized trials rather than RCTs, in which treatments are assigned by a *known* true model. By adjusting the parameters of such a true assignment model, the estimator variance for the trial population HTE estimator can be controlled (even with fixed N). Then the performance of FAST can be evaluated against different variance regimes of the trial HTE estimator, which will help us understand the sweet spot of the method. - Regarding experimental settings.
It is indeed quite standard for these types of papers to have 1 or 2 synthetic experiments and 1 real data experiment. However, in the case of this paper I find the simulation setting a bit weak. It would be great to perform experiments on multiple data generating mechanisms with randomly sampled parameters and coefficients, allowing us to evaluate the marginal performance of the method. Otherwise the authors have at most demonstrated the capability of the method on a single data generation mechanism (which is arguably much easier to hack/cherry-pick). - The other potential room for improvement is the baselines. I understand that the paper mainly compares to data fusion methods. However, due to the variance-bias trade-off of the shrinkage method, it would be natural to also expect some comparisons to variance reduction methods for trial estimators. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have somewhat discussed the limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful and comprehensive review. We are encouraged by the overall positive responses from the comments and suggestions. Our point-to-point responses to your comments are itemized below and we would add the discussions and numerical experiments in the revised manuscript if given a chance. **W1:** Our apologies for any confusion that might have arisen due to the unclear illustration. It is worthwhile to first note that for any regular unbiased estimator $\hat{\tau}_u$ based on the trial data and (potentially) biased estimator $\hat{\tau}_b$ based on the observational data, the variance of both estimators should admit the following expressions: \begin{equation} \sigma_u^2 = \mathrm{Var}(\hat{\tau}_u) = \frac{C_u}{n} \ \hbox { and } \ \sigma_b^2 = \mathrm{Var}(\hat{\tau}_b) = \frac{C_b}{m},\nonumber \end{equation} where $C_u, C_b$ are positive constants, and $n$ and $m$ are the sample sizes of the trial data and the observational data, respectively. Indeed, as mentioned in your comment, by applying various methods including adjusting the trial mechanisms (either randomized or non-randomized), one is able to control the variance for the trial population estimator $\sigma_u^2$ only through the constant factor $C_u$ without changing the order $n^{-1}$. But the implementation of our shrinkage method is independent of the trial mechanism. To see this, for a given trial mechanism and estimation method (mapping to a $C_u$), one can readily apply the shrinkage method to obtain the optimal fused estimator $\hat{\tau}\_{w^*}$, whose mean square error (MSE) is $(1 - w^*)\sigma_u^2$. 
Now, the relative improvement of our proposed estimator $\hat{\tau}\_{w^*}$ over the original trial estimator in terms of MSE is as follows: \begin{equation} 1 - \frac{\mathrm{MSE}(\hat{\tau}_{w^*})}{\mathrm{MSE}(\hat{\tau}_u)} = w^*, \hbox{ where } w^* = \frac{\frac{C_u}{n}}{b^2 + \frac{C_u}{n} + \frac{C_b}{m}}.\nonumber \end{equation} From the above equation, one can find that $w^*$ is not only an optimal weight facilitating the shrinkage estimation, but it itself also characterizes the extent of improvement in terms of MSE one can expect via incorporating the observational data. Thus, in real-world applications, one is able to get an estimate of the improvement of the procedure once $w^*$ is estimated. **W2:** In response to this suggestion, we designed the following two data-generating processes (DGPs). In both DGPs, we generated the pre-treatment covariates $X = (X_1, X_2, \cdots, X_p)^T$ from $\mathrm{Uniform}[-1,1]^p$, $U$ from $N(0,1)$, $D|(X,U, S=1) \sim Ber(0.5) $ and $\epsilon(d) \sim N(0,1)$. 
- (I) $$ Y(d) = d\tau(X,\beta_{\tau}) + X^T\beta_{\ell} + \beta_U U + \epsilon(d), \beta_{\tau} = (\beta_{\tau,1},\beta_{\tau,2},\cdots,\beta_{\tau,5})^T \sim N(1_{5\times 1}, 0.5^2I_5) , \beta_{\ell} = (\beta_{\ell,1},\beta_{\ell,2},\cdots,\beta_{\ell,p}) \sim N(1.5_{p\times 1}, 0.5^2I_5),\beta_U \sim N(1.5,0.5^2), $$ $$ \tau(X,\beta_{\tau}) = \beta_{\tau,1} + \beta_{\tau,2}X_1 + \beta_{\tau,3}X_1^2 + \beta_{\tau,4}X_2 + \beta_{\tau,5}X_2^2, D|(X,U,S=0) \sim Ber(1/(1+\exp(\beta U + \beta_oX_1))), \beta_{o} \sim N(1.5, 0.5^2) $$ - (II) $$Y(d) = d\tau(X, \beta_\tau) + \beta_{\ell,1}\sin{X_1} + \beta_{\ell, 2}\cos{X_2} + \beta U + \epsilon(d), \beta_{\tau} \sim Uniform(0.5,1.5)^p, \beta_{\ell} \sim N(1_{2\times 1}, 0.5^2I_2), $$ $$\tau(X,\beta_{\tau}) = 1 + \sum_{s=1}^p\beta_{\tau,s}\left(X_s+X_s^2\right), D|(X,U,S=0) \sim Ber(1/ (1+ \exp(\beta U + \beta_{o,1}X_1 + \beta_{o,2}X_2))), \beta_{o,i}\sim N(1.5,0.5^2).$$ It is noted that in (II), the factor $\beta$ appears in both $Y(d)$ and $D|X, U, S=0$. We repeated each DGP 100 times and the results are presented in Table 2 in the additional PDF. In both DGPs, the proposed $\mathrm{rfFAST}$ estimator performed the best in most scenarios. Furthermore, its advantage over other data fusion methods was more obvious when the trial sample size was small ($n=100$), which is exactly when data fusion is necessary to improve the causal effect estimation in real-world applications. **W3**. Thanks for your understanding of our focus on comparing with data fusion methods. Following your suggestion, we have done another round of literature review regarding variance reduction methods for trial estimators: - Li, F., Morgan, K.L. and Zaslavsky, A.M. (2018) Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. - Sturmer T. et al., (2021). Propensity score weighting and trimming strategies for reducing variance and bias of treatment effect estimates: a simulation study. 
American Journal of Epidemiology, 190(8):1659–1670. - Liao, J. and Rohde, C. (2022). Variance reduction in the inverse probability weighted estimators for the average treatment effect using the propensity score. Biometrics, 78(2):660–667. We believe that these references will enrich our literature review and make it more comprehensive. However, two points are also worth mentioning. First, while the $\mathrm{HT}$ estimator based on inverse probability weighting (IPW) is indeed sometimes considered unstable, the $\mathrm{GRF}$ estimator, namely the generalized random forest estimator, already has satisfying statistical properties verified both theoretically and empirically (Athey et al., 2019), where implicit variance reduction techniques have been applied. The second point is closely related to the core feature of our shrinkage method: the proposed framework is essentially built upon the trial and observational estimators, so the implementation of the procedure is independent of the construction of the trial estimator. In real applications, practitioners can first construct a variance-reduced trial estimator designed for their particular scenario, and then estimate the optimal weight $w^*$ to facilitate the shrinkage estimation.
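A minimal numerical sketch of the shrinkage rule described in the response to W1 above: $w^* = (C_u/n)\,/\,(b^2 + C_u/n + C_b/m)$, combined as $\hat{\tau}_{w^*} = (1-w^*)\hat{\tau}_u + w^*\hat{\tau}_b$. All numeric values below (the bias $b$, the variance constants $C_u, C_b$, the sample sizes, and the point estimates) are hypothetical, for illustration only.

```python
# Sketch of the shrinkage estimator from the rebuttal; all numbers are made up.

def optimal_weight(b, C_u, C_b, n, m):
    """Optimal shrinkage weight; also equals the relative MSE improvement."""
    var_trial = C_u / n            # variance of the unbiased trial estimator
    var_obs = C_b / m              # variance of the observational estimator
    return var_trial / (b ** 2 + var_trial + var_obs)

def shrink(tau_trial, tau_obs, w):
    """Convex combination of the trial and observational estimators."""
    return (1 - w) * tau_trial + w * tau_obs

w = optimal_weight(b=0.2, C_u=4.0, C_b=2.0, n=100, m=10_000)
tau = shrink(tau_trial=1.10, tau_obs=0.95, w=w)
```

One can check numerically that plugging $w^*$ into the MSE of the combined estimator makes the relative improvement $1 - \mathrm{MSE}(\hat{\tau}_{w^*})/\mathrm{MSE}(\hat{\tau}_u)$ equal $w^*$ itself, matching the identity quoted above.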
null
null
null
null
null
null
Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking
Accept (poster)
Summary: The authors suggest incremental updates of $\mathbf{x}$ for finding the minimum of a strongly-convex function $f$ that guarantee decreasing $f(\mathbf{x}_{t})-f(\mathbf{x}_\ast)$ based on only 1st-order gradient information. Their idea is, in each step:
- choose a candidate matrix $\mathbf{P}_t$ based on the set $\mathcal{S}_t$, and
- check condition (4), which guarantees sufficient decrease of $f$, and
- if the condition is satisfied:
  - apply the update $\mathbf{x}_{t+1} = \mathbf{x}_t - \mathbf{P}_t\nabla f(\mathbf{x}_t)$
  - $\mathcal{S}_{t+1} = \mathcal{S}_t$
- else:
  - $\mathbf{x}_{t+1} = \mathbf{x}_t$
  - update $\mathcal{S}_{t+1} = \text{cut}(\mathcal{S}_t, \mathbf{x}_t, \mathbf{P}_t)$

They provide proofs for
- approaching the optimum, in Proposition 3.2, and
- how to choose the candidate and the cut algorithm, in Theorem 5.3, together with its maximum number of calls.

They conduct some simple experiments in Section 6 and show the proposed algorithm's efficiency. Strengths: originality - considering backtracking using a preconditioning matrix $\mathbf{P}_t$ would be a novel idea. But I'm not an expert in this field, and am not so sure about the originality. quality - The proposed algorithm is supported by proofs, and it shows good empirical results. clarity - Basically, the manuscript is readable. significance - The proposed algorithm seems to be better than other baselines except for Diag. Hessian+LS, which uses information from 2nd-order derivatives, i.e., the Hessian, even though the proposed algorithm uses only 1st-order derivatives of $f$. It would be significant. Weaknesses: - The manuscript contains some typos. - I have a concern about Figure 5. The horizontal axis shows the number of f/grad evals, but I guess the number of CUT calls should also be taken into account. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In line 129, the representation $\nabla^2 f$ appears. Does it mean Hessian? or Laplacian?
- Between lines 197 and 198, the final inequality may not be $\overset{(3)}{\leq}$ but $\overset{(4)}{\leq}$? - In the caption of Figure 3(b), $\mathcal{H}\_{>}(\mathbf{u})$ should be replaced by $\mathcal{H}_>(\mathbf{v})$? - In Figure 5, do the authors take the number of CUT calls into account? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: They address the limitation that their method is only supported for the convex deterministic setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
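A rough, self-contained sketch of the accept/cut loop summarized in the review above, run on a toy ill-conditioned quadratic $f(x) = \tfrac{1}{2}x^T \mathrm{diag}(D)\, x$. The candidate rule (half the current per-coordinate bound) and the "cut" (halving the bound of the worst coordinate) are crude stand-ins for the paper's condition (4) and CUT subroutine, not the actual algorithm.

```python
import numpy as np

# Toy diagonal quadratic with condition number 1e4.
D = np.array([100.0, 1.0, 0.01])
f = lambda x: 0.5 * np.sum(D * x * x)
grad = lambda x: D * x

x = np.ones(3)
hi = np.ones(3)                          # per-coordinate step-size upper bounds

for _ in range(200):
    g = grad(x)
    P = hi / 2.0                         # candidate per-coordinate step sizes
    x_next = x - P * g
    # Armijo-type sufficient-decrease test (a stand-in for condition (4)).
    if f(x_next) <= f(x) - 0.5 * np.sum(P * g * g):
        x = x_next                       # accept: the set of bounds is kept
    else:
        hi[np.argmax(P * g * g)] /= 2.0  # reject: shrink the most suspect bound
```

As in the summary, a rejected candidate costs only a function/gradient evaluation and shrinks the search set; here, the bound on the badly-scaled first coordinate is halved a few times before progress begins.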
Rebuttal 1: Rebuttal: Thank you for engaging with our paper during the review period! Please see the discussion of the overhead of `CUT` (also raised by reviewer JVBS) in the overall response. For the other points: > considering backtracking using a preconditioning matrix $\mathbf{P}_t$ would be a novel idea. But I'm not an expert in this field, and am not so sure about the originality. We are not aware of prior work using similar approaches to estimate a preconditioner, or providing guarantees similar to the ones we present. We emphasize that, beyond the specific algorithm presented, a contribution of our work is the development of a formal definition of adaptive per-coordinate step-sizes and a way to provably find them. As mentioned in the introduction, existing definitions of adaptivity do not capture the benefits of preconditioning, even on simple linear regression problems. For example, the online-learning definition used by AdaGrad forces the step-size to go to 0 and leads to poor performance in practice. We believe that these ideas can lead to further work to improve adaptive methods. > In line 129, the representation $\nabla^2 f(x)$ appears. Does it mean Hessian? or Laplacian? $\nabla^2 f$ does refer to the Hessian. We will mention it on first appearance to avoid confusion. > Between lines 197 and 198, the final inequality may not be (3) but (4)? The numbers in the display math do not refer to equation numbers (which we assume is the confusion) but to the numbers `(1), (2), (3)` in the preceding paragraph. We will change those to `(a), (b), (c)` to disambiguate. > In the caption of Figure 3(b), $\\mathcal{H}\_{>}(\\mathbf{u})$ should be replaced by $\\mathcal{H}\_{>}(\\mathbf{v})$? Indeed, we will fix it.
Summary: This paper provides a backtracking approach for smooth convex optimization on a per-coordinate basis, with a theoretical analysis that shows the gain with respect to classical backtracking line-search and compares to the optimal per-coordinate preconditioners. Strengths: This paper is super well written and organized. The contribution is also significant as it is a building block of many problems in machine learning. In general, further improving the "adaptivity" of optimization algorithms is essential to seamlessly apply theoretical results (i.e., optimal per-coordinate step sizes) to operational purposes. Weaknesses: The only drawback might be the focus on smooth and strongly convex problems, but it is still a significant first step. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for sharing in our excitement with the paper! We agree that one of the major drawbacks of our technique is the focus on the smooth, strongly-convex case. We do address the PL case as a relaxation of strong-convexity in the Appendix, and hope our work will lead to others exploring relaxations like the convex-only, non-convex, stochastic, and other cases.
Summary: This paper extends backtracking to multiple dimensions. The authors propose a cutting-plane method to find optimal per-coordinate step-sizes (in other words, to find an optimal preconditioner) for smooth convex optimisation. Experiments on ill-conditioned logistic regression problems show that the proposed algorithm can find a good preconditioner and improve over vanilla gradient descent. Strengths: This paper fills a potential gap in the optimization literature by proposing multidimensional backtracking. The proposed method is technically sound and seems to work well in practice. Weaknesses: I do not see any major issues with the paper, except maybe that it is a bit hard to follow and understand (even though the English is good). Maybe because I don't have enough background on the topic. I'm really sorry for the short review. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: None. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for engaging with our paper during the review period! > This paper fills a potential gap in the optimization literature by proposing multidimensional backtracking We emphasize that, beyond the specific algorithm presented, a contribution of our work is the development of a formal definition of adaptive per-coordinate step-sizes and a way to provably find them. As mentioned in the introduction, existing definitions of adaptivity do not capture the benefits of preconditioning, even on simple linear regression problems. For example, the online learning definition used by AdaGrad forces the step-size to go to 0 and leads to poor performance in practice. We believe that these ideas can lead to further work to improve adaptive methods.
Summary: This paper presents a generalized backtracking line-search method, which estimates coordinate-wise step sizes, referred to as a 'preconditioner' of gradient descent. Stemming from the observation that existing methods do not exceed the performance of the backtracking line-search method, this paper designs a generalized backtracking line-search technique realized as a cutting-plane method, whose separating hyperplane comes from the hypergradient, i.e., the gradient with respect to a hyperparameter of the algorithm, which is the step size in this case. Along with a worst-case convergence analysis for smooth strongly convex functions, the authors also provide experimental results illustrating the competitiveness of this method for ill-conditioned problems and its robustness across problem classes. Strengths: Section 4 contains the key insight of this work: that a failed preconditioner (defined as one that violates an Armijo-type condition) provides a cutting plane on the set of valid preconditioners. This is a very nice idea that is, as far as I know, novel, and I expect this work to lead to a lot of follow-up work. This is a new type of result and I think it is valuable. Weaknesses: . Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The 'CUT' subroutine, which is the 'backtracking' phase of this algorithm, can be called up to a number of iterations linear in the dimension $d$, which can be large when considering large-scale problems. The experimental results illustrate that for large problems it recovers the preconditioner quite fast, but I'm also curious how much total overhead is caused by the subroutine `CUT`. (p.3 line 121) It seems there is a typo in the notation regarding $d, n, \\alpha$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for sharing in our excitement with the paper! Please see the discussion of the overhead of `CUT` (also raised by reviewer eKZX) in the overall response. Thank you for spotting the typo on l.121; a sentence got eaten due to a version conflict.
Rebuttal 1: Rebuttal: We thank all reviewers for engaging with our paper. We were very pleased to see reviewers JVBS and hZtF sharing in our excitement with the paper and appreciate the great feedback. We appreciate that reviewers 5i17 and eKZX truly engaged with our paper despite it being outside their area of expertise. If the reviewers have more feedback about which parts would benefit from more exposition to improve the presentation and reach a broader audience, we would appreciate specific recommendations. This feedback would be valuable for future expositions of our work. ### On the overhead of CUT Reviewers eKZX and JVBS both asked for more details on the overhead of backtracking and the `CUT` operation. The results in Figures 1 and 5 account for the overhead of backtracking by showing the number of gradient evaluations. The majority of the computational cost of a backtracking step comes from computing the gradient at the next point to form the (hyper-)gradient. This is why our algorithm does not make progress at the start, as the first gradient evaluations are spent on backtracking. The overhead of `CUT` is minimal compared to gradient computations, as it only involves a few vector operations (see Figure 11 in Appendix A for the pseudocode). Even solving for the best convex combination numerically (see lines 287-288 or Appendix D) is faster than computing gradients. The table below gives running times for parts of a backtracking update on RCV1 for the Ellipsoid version (average runtime over 100 calls, ±std over 10 repeats).

| Operation | Average runtime | ±std |
|--------------------------------------------------------------------------|-----------------|---------|
| Compute gradient, preconditioner and next iterate | 24.4 ms | ±0.2 ms |
| Compute hypergradient | 12.6 ms | ±0.1 ms |
| Compute `CUT` (using Lemma 5.2) | 0.9 ms | ±0.1 ms |
| Compute `CUT` (using `scipy.optimize` to find the min. volume ellipsoid) | 7.6 ms | ±0.1 ms |
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation
Accept (poster)
Summary: This paper studies the actor-critic approach for robust RL. In particular, a Double-Sampling Uncertainty Set and an Integral Probability Metric Uncertainty Set are developed to overcome the curse of problem scale. A robust natural Actor-Critic algorithm is then proposed with convergence results. A significant number of experiments are designed. Strengths: The paper is well-written and clear in general. The design of the two uncertainty sets is novel and shows advantages on large-scale problems. Weaknesses: 1. The motivation for designing the two uncertainty sets is somehow unclear to me. I understand there is uncertainty in the sets designed, but I don't understand the motivation for this uncertainty set. In lines 640-646, the authors explain that the uncertainty set contains transition kernels that are perturbed from the uniform distribution; this explanation seems unclear to me, and I can't understand the motivation for such a definition. 2. One of the critical problems in studying robust RL with function approximation is the contraction of the approximated Bellman operator. The approach used in this paper is similar to the previous ones, i.e., using conditions on the radius of the uncertainty set and the discount factor. This hence reduces the novelty and contribution of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you explain the motivation for the design of the uncertainty sets? In what sense do they imply robustness and uncertainty? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See the parts above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are encouraged by the reviewer's comments that the paper is well-written, and that the designed uncertainty sets are novel and show advantages for large-scale problems. Below, we give a detailed response to your comments. We believe that we have addressed all your concerns, and we sincerely hope that the reviewer would consider increasing their score. Please note that line numbers are based on the supplementary material (RNAC-full.pdf). **Q1.** "The motivation for designing the two uncertainty sets is somehow unclear..." **Response:** From the perspective of the development of robust RL in the literature, previous works have considered multiple types of uncertainty sets, such as the KL uncertainty set, the Total-Variation uncertainty set, and the Chi-square uncertainty set. However, as we have discussed in Lines 42-49 and Appendix B, these uncertainty sets do not scale up for function approximation in large state spaces. This motivates the design of tractable uncertainty sets for large state spaces. The tractability is provided by the computationally efficient empirical robust Bellman operators of the designed uncertainty sets (Eq. (3) and Eq. (6)). The Double-Sampling (DS) uncertainty set (cf. Lines 134-140 and 640-646) can be motivated and explained as follows. In canonical RL, the next-step state $s'$ sampled from the nominal model, i.e., $s' \sim p^0_{s, a}$, can be viewed equivalently as a double-sampling process: first generate $m$ states $s_1', s_2', \ldots, s_m'$ sampled i.i.d. according to $p^0_{s, a}$, and then uniformly select one of these $m$ states as the next-step state $s'$. Double sampling gets its name from these two phases of sampling. With this interpretation, we can thus perturb the nominal transition $p^0_{s, a}$ by perturbing the second-phase sampling away from uniform selection.
We let the selection distribution be $\alpha \in \Delta(m)$ and allow it to deviate from the uniform distribution $\text{Unif}(m)$ as $\mathrm{d}(\alpha, \text{Unif}(m)) \leq \delta$. Due to this perturbation of the second phase of the double-sampling process ($\alpha$ can depend on the $m$ states generated in the first phase), the induced next-step state $s'$ will not follow the nominal kernel $p^0_{s, a}$ but a new kernel that lies in the uncertainty set. The DS uncertainty set is implicitly defined as the set of all such new kernels, which is determined by $m, \mathrm{d}(\cdot, \cdot), \delta$. IPM is a general class of divergence measures that contains many metrics as special cases for different function classes (cf. Lines 158-160). IPM takes advantage of the prior information in the function class, which is helpful when considering RL with a large state space under function approximation. We take a function class based on the feature matrix Eq. (5), and the IPM-based uncertainty set built on this function class takes advantage of the underlying structure of the value function approximation. Please let us know if the above discussion is clearer for motivating the design of the uncertainty sets. If so, we will add the discussion in the revision. **Q2.** "...The approach used in this paper is similar to the previous ones, i.e., using conditions on the radius of the uncertainty set and the discount factor. This hence reduces the novelty and contribution of the paper." **Response:** While we acknowledge the partial adoption of the approach in [47] by Assumption 2 for the DS uncertainty set (Proposition 2), it is essential to emphasize that [47] does not offer any specific uncertainty set satisfying Assumption 2. Our paper introduces technical novelty by demonstrating that the commonly considered f-divergence may violate Assumption 2 (Proposition 3).
Additionally, the contraction property of the IPM uncertainty sets shown in Lemma 1 is established independently of the previous approach. Furthermore, a large part of the analysis in the paper concerns the convergence of the proposed algorithm (e.g., Theorems 1-4 in the main paper), which also constitutes a significant technical contribution. The fact that the proof of Proposition 2 partly leverages a previous approach diminishes neither the overall novelty of the paper nor its technical contribution. In addition, the major novelty of the paper can be attributed to the design of the new uncertainty sets and the robust policy-based algorithm. The major contributions, besides the novel uncertainty sets and algorithms, are both theoretical -- convergence guarantees for robust RL under function approximation -- and empirical -- extensive experiments in MuJoCo simulation and on real-world TurtleBots suggesting robust behavior of the proposed algorithm. By providing guarantees, we have successfully closed an important gap in the literature on policy-based approaches for robust RL under function approximation. [47] Aviv Tamar, Shie Mannor, and Huan Xu. Scaling up robust MDPs using function approximation. In International Conference on Machine Learning, pages 181–189. PMLR, 2014. **Q3.** "In what sense do they imply robustness and uncertainty?" **Response:** By the definition of the uncertainty set for the Robust MDP (RMDP, cf. Lines 103-119), the results establish robustness in the sense that the optimal policy of the RMDP achieves the best performance under the worst transition model in the uncertainty set. Equipped with the newly designed DS or IPM uncertainty sets, the proposed algorithm aims to find the optimal policy of the corresponding RMDP.
We have demonstrated that the proposed algorithm with either DS or IPM uncertainty sets results in robust policies, in the sense that the learned policies have more stable performance in perturbed environments, as shown in Figures 1-2 in Section 7, Figures 4-5 in Appendix A, and Figures 1-2 in the attached new pdf. --- Rebuttal Comment 1.1: Comment: The response solves my concerns, and I hence increase my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for carefully reading our rebuttal and raising the score. We will add the above clarification in our final version.
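As a small numerical sketch of the double-sampling (DS) perturbation described in the response to Q1 above: draw $m$ next states from a nominal kernel, then let an adversary reweight the uniform selection distribution $\text{Unif}(m)$ within a total-variation ball of radius $\delta$ to minimize the expected next-state value. The nominal kernel, the state values, the TV choice of $\mathrm{d}(\cdot,\cdot)$, and the fixed draw are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

m, delta = 8, 0.2                           # assumes delta < 1
p_nominal = np.array([0.5, 0.3, 0.2])       # nominal next-state distribution
values = np.array([1.0, 0.4, 0.0])          # value of each next state

# First phase: m i.i.d. draws from p_nominal (fixed here for reproducibility).
samples = np.array([0, 0, 1, 0, 2, 1, 0, 2])
v = values[samples]

# Second phase: start from Unif(m) and move delta mass from the
# highest-value samples onto the lowest-value sample (worst case under TV).
alpha = np.full(m, 1.0 / m)
remaining = delta
for i in np.argsort(v)[::-1]:               # drain the best outcomes first
    take = min(alpha[i], remaining)
    alpha[i] -= take
    remaining -= take
    if remaining <= 0.0:
        break
alpha[np.argmin(v)] += delta                # pile the drained mass on the worst

robust_value = float(alpha @ v)             # at most the unperturbed average of v
```

The resulting `alpha` is a valid distribution at TV distance `delta` from uniform, and the robust value it induces lower-bounds the uniform average, mirroring how the DS set perturbs the nominal kernel.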
Summary: This paper tackles robust reinforcement learning in large state spaces, where the transition kernel is accessible only in a nominal setting. The authors demonstrate that the $f$-divergence, R-contamination, and the $l_{p}$ norm are computationally infeasible in the context of robust RL for large state spaces. To overcome this limitation, the authors propose two new tractable uncertainty set formulations suitable for large dimensions: double sampling (DS) and the integral probability metric (IPM). DS involves independently and identically distributed state sampling following a transition kernel $p^o_{s,a}$. The IPM corresponds to the robust Bellman operator, but with a regularization term on the norm of the weights. Both approaches enable robust RL in scenarios previously hindered by computational complexity. The paper introduces a new algorithm, the Robust Natural Actor Critic (RNAC), for training both a critic and an actor for the proposed IPM and double-sampling uncertainty sets. The authors provide convergence guarantees for the RNAC algorithm. Finally, the authors demonstrate the efficiency of their approach via two applications: one involving the suite of MuJoCo environments and the other a real-world robotics application. These practical applications lend credence to the theoretical contributions and the robustness of the proposed approach. Strengths: - The authors provide two straightforward and computationally feasible uncertainty set formulations for large state spaces. - The paper offers substantial theoretical contributions. - The real-world robotics application lends credibility to the paper's robustness claims. Weaknesses: - The convergence guarantees are valid for $(s,a)$-rectangular uncertainty sets, which is rather limiting. However, it is commendable that the authors have acknowledged this limitation in their work and suggested it as an avenue for future research. - The paper seems incomplete without the appendix.
The need to constantly refer to the appendix disrupts the flow of reading. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - In lines 175-177, why did you not apply regularization to the bias parameter in the last layer? Moreover, this seems inconsistent with the provided implementation (lines 201-206 in the `RNAC-ppo/ppo_continious.py` file), where regularization is applied to all the critic's layers. - Doesn't the IPM's proposed regularization ultimately reduce the Lipschitz constant of the learned function, making the agent less sensitive to state variations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors mention their limitations as future research directions. We appreciate the authors for being upfront. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the clear summary of the paper, for finding our theoretical results a substantial contribution, and for finding our empirical evaluations credible. **Q1.** "The convergence guarantees are valid for $(s,a)$-rectangular uncertainty sets, which is rather limiting. However, it is commendable that the authors have acknowledged this limitation in their work and suggested it as an avenue for future research." **Response:** As the reviewer noted, the $(s,a)$-rectangularity assumption is now a standard assumption used in the robust MDP/RL literature that can yield tractable theoretical analysis. We completely agree with the reviewer that this assumption is indeed a limitation of the existing literature, including ours. We believe alleviating this limitation while maintaining theoretical tractability is an essential endeavor for future research in robust RL. **Q2.** "The paper seems incomplete without the appendix. The need to constantly refer to the appendix disrupts the flow of reading." **Response:** We thank the reviewer for the valuable feedback. We refer to the appendix for detailed or technical discussions in several places in the main paper due to space limits. To eliminate the disruptions of the flow, we will not refer to the appendix at each place in the main paper, but will simply add one remark discussing the appendix and its supporting relation to the sections, all in one place. We would be glad to act on any further suggestions from the reviewer to improve our presentation. **Q3.** "In lines 175-177, why did you not apply regularization to the bias parameter in the last layer? Moreover, this seems inconsistent with the provided implementation (lines 201-206 in the RNAC-ppo/ppo\_continious.py file), where regularization is applied to all the critic's layers."
**Response:** It is theoretically proved in Proposition 1 that under the designed IPM uncertainty set, the regularization does not include the bias parameter. The regularization without the bias parameter is thus an implication of the structure of the designed IPM uncertainty set. We thank the reviewer for the careful examination of our code. The theory is consistent with our implementation, where in lines 201-206 of the RNAC-ppo/ppo\_continous.py file, the last layer's bias term is not included in the regularization, cf. bias\_norm[0:-1] in line 206. **Q4.** "Doesn't the IPM's proposed regularization ultimately reduce the Lipschitz constant of the learned function, making the agent less sensitive to state variations?" **Response:** We agree with the reviewer's intuition that this regularization has the effect of reducing the Lipschitz constant of the value function, and thus the learned value function is less sensitive to state variations. This intuition also corroborates the theoretical finding of regularization without the bias term in Proposition 1, since the bias term corresponds to a vertical shift of the value function and has no impact on its Lipschitz constant. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: The rebuttal addressed my concerns. I'll keep my rating. --- Reply to Comment 1.1.1: Comment: We are pleased that our rebuttal has addressed the reviewer's concerns. We greatly appreciate the reviewer's recognition of our work.
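A minimal sketch (not the authors' implementation) of the kind of critic regularizer discussed in Q3/Q4 above: penalize the squared norms of all weight matrices and all bias vectors except the last layer's bias, which the IPM analysis leaves unregularized. The layer shapes, random parameters, and coefficient are made-up illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Parameters of a toy two-layer critic; shapes are illustrative only.
weights = [rng.standard_normal((32, 4)), rng.standard_normal((1, 32))]
biases = [rng.standard_normal(32), rng.standard_normal(1)]

def ipm_style_penalty(weights, biases, coef=1e-3):
    """Squared-norm penalty over all weights and all but the last bias."""
    total = sum(np.sum(W ** 2) for W in weights)
    total += sum(np.sum(b ** 2) for b in biases[:-1])  # skip last-layer bias
    return coef * total

penalty = ipm_style_penalty(weights, biases)
```

Excluding only the final bias matches the intuition in the response: that bias merely shifts the value function vertically and does not affect its Lipschitz constant, so penalizing it would buy no robustness.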
Summary: This paper studies the sim-to-real transfer problem. It addresses the learning of a robust policy using the framework of robust Markov decision processes (RMDPs) and extends this paradigm to large state and action spaces via two uncertainty set formulations: double sampling and the integral probability metric. These formulations are then used in the proposed algorithm, robust natural actor critic (RNAC). RNAC is tested in MuJoCo as well as on a real robot. Strengths: * The proposed uncertainty sets as well as the RNAC algorithm seem like practical steps forward in sim-to-real transfer with robustness approaches. * The theoretical analysis seems interesting Weaknesses: * The experiments compare only to PPO. They should compare to other sim-to-real methods such as dynamics randomization [1] or action noise envelope [2]. * The related works being relegated to the appendix seems like a red flag. The authors should better organize the paper to include related works and comparisons in the main paper. ### References [1] Peng, X.B., Andrychowicz, M., Zaremba, W. and Abbeel, P., 2018, May. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3803-3810). IEEE. [2] Jakobi, N., Husbands, P. and Harvey, I., 1995. Noise and the reality gap: The use of simulation in evolutionary robotics. In Advances in Artificial Life: Third European Conference on Artificial Life, Granada, Spain, June 4–6, 1995, Proceedings 3 (pp. 704-720). Springer Berlin Heidelberg. ================================ The additional experiments alleviate my concerns about the experimental analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In the second to last paragraph of Section 2, when describing the single worst-case kernel, is this kernel constant for the entire state space or varying? * In the double sampling setup, does $\sum \alpha = 1$ in Equation (3)?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your comments and suggestions. We are encouraged by the fact that the reviewer finds that our paper takes "practical steps forward in sim-to-real transfer with robustness approaches" and has "interesting theoretical analysis". Please see our response below with respect to the specific comments. Please note that line numbers are based on the supplementary material (RNAC-full.pdf). We believe that we have addressed all the concerns, and we sincerely hope that the reviewer would consider increasing the score. **Q1.** "The experiments compare only to PPO. They should compare to other sim-to-real methods such as dynamics randomization or action noise envelope." **Response:** We compare the proposed algorithm with the baselines suggested by the reviewer -- "dynamics randomization" and "action noise envelope" -- in the MuJoCo environments and the TurtleBot experiment. Please note that in Appendix A.3, we have in fact compared the proposed RNAC algorithms with soft actor-critic and soft-robust PPO algorithms in MuJoCo, and indeed reported results in Figure 5 with a detailed explanation in Lines 557-582. The core idea of the soft-robust algorithm is the same as "dynamics randomization", where the agent learns an optimal policy based on a distribution over an uncertainty set instead of considering the worst-case scenario. We add "action noise envelope" (Gaussian noise $\mathcal{N}(0, 0.05)$) as an additional baseline in Figure 5 of the attached pdf. In Figure 1 (attached new pdf), we also observe that the cumulative reward of RNAC-PPO decays much more slowly compared with action noise envelope, which further demonstrates the robust performance of the proposed algorithms. Additionally, based on the reviewer's suggestion, we have added two more baselines (i.e., "dynamics randomization" and "action noise envelope") to demonstrate the robustness of the proposed methods in the real-world TurtleBot environment. 
As shown in Figure 2 (attached new pdf), the proposed algorithm RNAC-PPO enjoys higher target-reaching rates (100\%) compared with action noise envelope (67.5\%), dynamics randomization (9\%), and PPO (0\%) under perturbed testing environments, which illustrates the robustness of the RNAC-PPO algorithm. Please see Lines 330-345 and Lines 584-600 for details of the TurtleBot environments. **Please also see Authors' Response to All.** **Q2.** "The related works being relegated to the appendix seems like a red flag. The authors should better organize the paper to include related works and comparisons to the main paper." **Response:** We have in fact discussed the most important related works quite extensively in the introduction. Please note that Lines 30-66 are indeed a survey of related works, where almost all references have been mentioned. We leave the additional discussion of the technical side of these works, e.g., convergence rate or sample complexity, to Appendix G, mostly due to space constraints. However, to avoid confusion, we will add a "related work" subsection under the introduction section in our revision. **Q3.** "In the second to last paragraph of Section 2, when describing the single worst-case kernel, is this kernel constant for the entire state space or varying?" **Response:** As adopted in previous works in the reinforcement learning literature, we refer to a map $p: \mathcal{S} \times \mathcal{A} \rightarrow \Delta_{\mathcal{S}}$ as a transition kernel. For each current state $s$ and current action $a$, the worst-case kernel $p(s'| s, a)$ gives the probability that the next state is $s'$. It does depend on $s$. The second-to-last paragraph of Section 2 summarizes the existing results on robust MDPs in the literature [18, 37], so as to lay the foundation of robust RL. 
The major claim in robust MDPs with an $(s,a)$-rectangular uncertainty set is that the optimal policy is stationary (not time-varying) and that the worst-case transition for any stationary policy is also stationary. [18] Garud N Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005. [37] Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005. **Q4.** "In the double sampling setup, does $\sum \alpha =1$ in Equation (3)?" **Response:** Yes, in Eq. (3), $\sum_{i=1}^m \alpha_i = 1$ since $\alpha \in \Delta_{[m]}$, where $\Delta_{[m]}$ is the probability simplex, i.e., $\Delta_{[m]} = \{ \beta \in \mathbb{R}^m: \sum_{i=1}^m \beta_{i} = 1, \beta_i \geq 0, \forall~i\}$. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for their response and for the additional experiments taking into account the suggestions put forth in the review. These experiments now certainly showcase the effectiveness of the proposed technique better. I also thank the authors for answering the questions in the review. The related works could be better presented, but the authors' possible modifications might make the comparison sufficient. With these points, I will be revising my score. --- Reply to Comment 1.1.1: Comment: We are pleased that our rebuttal addresses the reviewer's concerns. We thank the reviewer for the valuable suggestions and recognition of our work. We would be glad to take action on any further suggestions from the reviewer to make our presentation better.
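As a side note on the Q4 answer above, the simplex constraint $\alpha \in \Delta_{[m]}$ is easy to check numerically: a convex combination of $m$ sampled transition kernels with such weights is again a valid transition kernel. The sketch below is purely illustrative (the sizes `m`, `S` and all variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: m sampled kernels over a toy state space of S states.
m, S = 4, 5

# Each sampled kernel: rows are probability distributions over next states.
kernels = rng.dirichlet(np.ones(S), size=(m, S))  # shape (m, S, S)

# Mixture weights alpha live on the probability simplex Delta_[m]:
# alpha_i >= 0 and sum_i alpha_i = 1, as stated in the Q4 response.
alpha = rng.dirichlet(np.ones(m))

# A convex combination of valid kernels is again a valid kernel.
mixed = np.tensordot(alpha, kernels, axes=1)  # shape (S, S)

assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
assert np.allclose(mixed.sum(axis=1), 1.0)  # every row is still a distribution
```

This is only a sanity check of the simplex constraint, not an implementation of the paper's double-sampling uncertainty set.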
Summary: RL methods trained on simulators suffer from generalization problems because of the "simulation-to-reality gap". Previous works proposed robust RL methods in a tabular setting, with limited search spaces. The paper aims to develop a computationally tractable robust RL algorithm for large search spaces. To this end, the paper proposes two novel uncertainty sets and the first policy-based approach for robust RL with provable convergence guarantees. Strengths: The paper studies a critical problem. Several technical contributions are proposed to devise a robust policy-based RL method with a large search space. The theoretical analysis seems solid. Weaknesses: The paper aims to devise a robust RL method. More real-world experiments are expected to demonstrate the robustness of the proposed methods. --------------------- After reading the rebuttal, my main concerns were addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can previous tabulated-based robust RL methods be applied to the experiments in Sec. 7? If so, what about the comparison results? On the other hand, can the author provide some experiments to directly compare the proposed method with large search space to the previous one with limited search space to demonstrate the benefits of the proposed method? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has discussed its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We are encouraged by the fact that the reviewer finds that our paper "studies a critical problem" and provides "solid theoretical analysis". Please see our response below with respect to the specific comments. Please note that line numbers are based on the supplementary material (RNAC-full.pdf). We believe that we have addressed all the concerns, and we sincerely hope that the reviewer would consider increasing the score. **Q1.** "More real-world experiments are expected to demonstrate the robustness of the proposed methods." **Response:** We would like to emphasize that our paper is primarily a theoretical work that develops large-scale robust RL algorithms with provable convergence guarantees, the first of their kind to the best of our knowledge. We have also included extensive simulations of our RNAC algorithms on MuJoCo environments (Hopper-v3, Walker2d-v3, and HalfCheetah-v3) and have demonstrated their superior performance compared to many benchmarks. Unlike similar theoretical works on RL algorithms, which limit their evaluations to simulation experiments, we have gone one step further and demonstrated the effectiveness of our RNAC algorithm on a real-world mobile robot. **We have included the video of this real-world robot demonstration**, see Section 7.2 and the supplementary files. Now, based on your suggestion (also suggested by Reviewer haxo), we have added the results from additional MuJoCo and TurtleBot environments to illustrate the superior performance and effectiveness of our RNAC algorithm. **Please see Authors' Response to All for details**. We sincerely hope that these additional experiments will alleviate the reviewer's concerns. **Q2.** "Can previous tabulated-based robust RL methods be applied to the experiments in Sec. 7? 
Can the authors provide some experiments to directly compare the proposed method with large search space to the previous one with limited search space to demonstrate the benefits of the proposed method?" **Response:** We thank the reviewer for this suggestion. In Section 7, we run the proposed algorithm and non-robust baselines in large-scale MuJoCo (Hopper-v3, Walker2d-v3, and HalfCheetah-v3) and TurtleBot experiments with continuous state spaces and continuous action spaces. The uncertainty sets studied in previous works cannot scale up as discussed in Lines 42-49 and Appendix B, which makes them inapplicable to the experiments in Section 7. (Previous tabular-based papers show experimental results only in the tabular setting, as in [25, 57].) Therefore, new *large-scale* robust RL algorithms with corresponding convergence guarantees and superior empirical results are required, which is the major motivation of this work. [25] Navdeep Kumar, Esther Derman, Matthieu Geist, Kfir Levy, and Shie Mannor. Policy gradient for s-rectangular robust Markov decision processes. arXiv preprint arXiv:2301.13589, 2023. [57] Yue Wang and Shaofeng Zou. Policy gradient method for robust reinforcement learning. In International Conference on Machine Learning, pages 23484–23526. PMLR, 2022. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed my main concerns. I'll keep my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable comments on experimental evaluation and we are pleased to know that our rebuttal addresses the reviewer's concerns. Please let us know if you have any further questions. We will be happy to answer them. If you find our response satisfying, we wonder if you could kindly consider raising the score rating of our work?
Rebuttal 1: Rebuttal: ## Authors' Response to All We wholeheartedly thank all reviewers for their time and their constructive feedback on our paper. As suggested by Reviewers Fq66 and haxo, **we have added additional MuJoCo and real-world TurtleBot experiments** in the attached new pdf. As suggested by Reviewer haxo, we have added more baselines (i.e., "dynamics randomization" and "action noise envelope") to demonstrate the robustness of the proposed methods in the real-world TurtleBot environment. As shown in Figure 2 (in the attached new pdf), the proposed algorithm RNAC-PPO enjoys higher target-reaching rates (100\%), compared with action noise envelope (67.5\%), dynamics randomization (9\%), and PPO (0\%), under perturbed testing environments, which illustrates the robustness of the RNAC-PPO algorithm. Please see Lines 330-345 and Lines 584-600 for details of the TurtleBot environments. We have also added one more baseline (i.e., "action noise envelope") to illustrate the robustness of the proposed methods in MuJoCo environments. In Figure 5 (original paper), we have already compared the proposed RNAC-PPO with PPO, soft actor-critic, and soft-robust PPO (the same idea as dynamics randomization). Please see Lines 557-582 for a detailed description of the robust performance of RNAC-PPO. In Figure 1 in the attached new pdf, we also observe that the cumulative reward of RNAC-PPO decays much more slowly compared with action noise envelope, which further demonstrates the robust performance of the proposed algorithms. *This paper closes an important gap in the policy-based approaches for robust RL under **function approximation** with **theoretical guarantee**.* Though this is a theory-driven paper with two novel uncertainty set designs and a robust policy-based algorithm with a convergence guarantee, extensive experiments were conducted in the paper, and more baselines were added during the rebuttal, lending credence to the theoretical contribution. 
We also hope that our detailed response will convince the reviewers of the value of our work and they will consider increasing their evaluations accordingly. Pdf: /pdf/5442871d52f22d38ae7dbba56e79dd20bbed67d9.pdf
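As background for the "action noise envelope" baseline discussed throughout this thread, the core idea is simply to inject Gaussian noise (the rebuttal uses $\mathcal{N}(0, 0.05)$) into every action during training so the learned policy tolerates actuation error. The sketch below is a minimal illustration under that assumption; the class and method names are hypothetical, not from the paper or any specific library:

```python
import numpy as np

class ActionNoiseEnvelope:
    """Hypothetical wrapper: perturb each action with Gaussian noise before
    forwarding it to the underlying environment (the 'action noise envelope'
    idea from the evolutionary-robotics literature)."""

    def __init__(self, env, sigma=0.05, seed=0):
        self.env = env
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        noise = self.rng.normal(0.0, self.sigma, size=np.shape(action))
        return self.env.step(np.asarray(action, dtype=float) + noise)

class EchoEnv:
    """Toy environment that just returns the action it receives."""
    def step(self, action):
        return action

wrapped = ActionNoiseEnvelope(EchoEnv(), sigma=0.05)
out = wrapped.step(np.zeros(3))
assert out.shape == (3,)
assert np.all(np.abs(out) < 1.0)  # small perturbation around the clean action
```

In practice the same wrapper pattern is applied to the simulator used during training, while evaluation runs on the unperturbed (or differently perturbed) environment.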
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Federated Learning with Bilateral Curation for Partially Class-Disjoint Data
Accept (poster)
Summary: This paper addresses a challenge in federated learning referred to as partially class-disjoint data (PCDD), where each client contributes a part of the classes (instead of all classes) of samples. Without full classes, the local objective will contradict the global objective, yielding the angle collapse problem for locally missing classes and the space waste problem for locally existing classes. This is a real-world challenge: for example, some classes may be well sampled in certain regions but not in others. Prior art mainly focuses on general heterogeneity without specially considering the partially class-disjoint challenge. The goal is to achieve holistic improvement in the bilateral views (both global view and local view) of federated learning. The authors propose FedGELA, where the classifier is globally fixed as a simplex ETF while locally adapted to the personal distributions. Globally, FedGELA provides fair and equal discrimination for all classes and avoids inaccurate updates of the classifier, while locally it utilizes the space of locally missing classes for locally existing classes. The proposed approach builds upon the simplex equiangular tight frame (ETF), which provides each class the same classification angle and generalizes well on imbalanced data. Specifically, in their FedGELA approach, the classifier is globally fixed as a simplex ETF while locally adapted based on the local distribution matrix to utilize the wasted space for the existing classes. In the global view, FedGELA merges class features and their corresponding classifier vectors, which converge to an ETF. In the local view, it provides existing major classes with larger feature spaces and encourages utilization of the spaces wasted by locally missing classes. 
Contributions are summarized as: - Study the algorithmic implications of a real-world challenge, partially class-disjoint data (PCDD), namely angle collapse and space waste - Propose FedGELA and theoretically show the local and global convergence analysis for PCDD with experimental verification - Evaluate on multiple benchmark datasets under the PCDD case and a real-world dataset to demonstrate the bilateral advantages of FedGELA over state-of-the-art methods. Strengths: 1. Related work well covers comparison among a range of FL methods and why PCDD is not covered. Examples of why prior art does not address PCDD include: generic federated learning adopts a uniform treatment of all classes, then attempts to mitigate personal differences; personalized federated learning places less emphasis on locally missing classes and selectively shares parameters/prototypes to minimize the impact of personal characteristics. While these methods might directly or indirectly help mitigate the data shifts caused by PCDD, neither achieves holistic improvement for global and local views. 2. Performance evaluation on 3 relevant datasets (SVHN, CIFAR10, CIFAR100), against all the top state-of-the-art algorithms as baselines, showing that it outperforms them. FedGELA consistently exceeds all baselines. Weaknesses: Overall, the paper represents a solid contribution - a well-defined problem not addressed by prior art and representative of a real-world problem for federated learning. Solid treatment of prior art, and differentiation from prior methods. 1. Only 1 real PCDD federated application Fed-ISIC2019 was evaluated - however I am not aware of other benchmarks I would recommend. 2. Performance improvements against the best baseline for all tests were all <3%. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Given the performance improvements were <3% against the best baselines - can you provide more information? 
For example, does it perform better or worse under certain conditions? If so, please distinguish, and if possible explain what this may imply about either the algorithm and its limitations, or perhaps a limitation in the benchmark with respect to fully characterizing the real-world challenge? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative societal effects. Limitations explained in Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We really appreciate your positive support and the constructive comments. In the following, we provide a detailed response and hope it can address your concerns. Let W, Q and A denote Weakness, Question and Answer, respectively.** > **W1:** Only 1 real PCDD federated application Fed-ISIC2019 was evaluated - however I am not aware of other benchmarks I would recommend. **A:** To address the reviewer's concern, we **test FedGELA on FEMNIST [1] and SHAKESPEARE [2] (two datasets that also satisfy the PCDD setting)** and compare it with all related approaches in the paper. FEMNIST includes complex 62-class handwriting images from 3500 clients, and SHAKESPEARE is a next-word prediction task with 1129 clients. Most of the clients only have a subset of class samples. With the help of LEAF [3], we choose 50 clients from each dataset for the federation, and in each round we randomly select 10 clients for training. The total number of rounds is set to 20, and the model structure is a simple CNN for FEMNIST and a 2-layer LSTM for SHAKESPEARE. As can be seen **in the following table**, **our method achieves the best results in both personal and generic performance on all three real-world challenges**.

| Dataset | Split | FedAvg | Best Baseline | FedGELA |
| --- | --- | --- | --- | --- |
| SHAKESPEARE | PA | 49.56 **+4.07** | 51.66 **+1.97** | **53.63** |
| | GA | 44.53 **+3.86** | 47.29 **+1.10** | **48.39** |
| FEMNIST | PA | 67.02 **+4.82** | 69.54 **+2.30** | **71.84** |
| | GA | 59.54 **+2.54** | 61.22 **+0.86** | **62.08** |
| Fed-ISIC2019 | PA | 77.27 **+2.00** | 78.91 **+0.36** | **79.27** |
| | GA | 73.59 **+2.26** | 74.98 **+0.96** | **75.85** |

[1] EMNIST: Extending MNIST to handwritten letters [2] William Shakespeare: the complete works [3] Leaf: A benchmark for federated settings > **W2 and Q1:** Performance improvements against the best baseline for all tests were all <3%. 
Given the performance improvements were <3% against the best baselines - can you provide more information? For example, does it perform better or worse under certain conditions? If so, please distinguish, and if possible explain what this may imply about either the algorithm and its limitations, or perhaps a limitation in the benchmark with respect to fully characterizing the real-world challenge? **A:** Thank you for the suggestion. We explain this in the following two points. 1) **Significant improvement in pure PCDD settings.** For consideration of practical data distributions and cohesion with most previous works, we use a Dirichlet distribution to split the dataset, which generates heterogeneity beyond PCDD and might limit the potential improvement. To further address the reviewer's concern, **we decouple the PCDD setting from ordinary heterogeneity (Non-PCDD) and conduct the corresponding experiments in pure PCDD situations.** In the following table, we use PxCy to denote that the dataset is divided into x clients with y classes of samples each, and in each round, 10 clients are selected for federated training. The number of training rounds is 100. According to the experimental results, we can see that our FedGELA achieves **significant improvement, notably +18.56% over FedAvg and +10.04% over the best baseline** on CIFAR10(P50C2). 
| Dataset (Split) | Metric | FedAvg | Best Baseline | FedGELA |
| --- | --- | --- | --- | --- |
| CIFAR10 (P10C2) | PA | 92.08 **+3.76** | 94.07 **+1.77** | **95.84** |
| | GA | 47.26 **+12.34** | 52.02 **+7.58** | **59.60** |
| CIFAR10 (P50C2) | PA | 91.74 **+3.68** | 93.22 **+2.20** | **95.42** |
| | GA | 36.22 **+18.56** | 44.74 **+10.04** | **54.78** |
| SVHN (P10C2) | PA | 95.64 **+3.11** | 97.02 **+1.73** | **98.75** |
| | GA | 69.34 **+14.22** | 76.06 **+7.50** | **83.56** |
| SVHN (P50C2) | PA | 94.87 **+3.50** | 96.88 **+1.49** | **98.37** |
| | GA | 66.94 **+10.24** | 72.97 **+4.21** | **77.18** |

2) Regarding the potential limitations, we think that curating the structure of the last layer builds on top of deep neural networks, whose fitting ability is sufficiently powerful. Thus regularizing the structure of the classification layer does not hurt the overall model capacity too much, and promotes training calibration under PCDD. But when it comes to shallow models, regularizing the structure of the classification layer might be too harsh and might hurt training. Another point worth exploring in the future is self-supervised learning, where label information may not be available; how to curate the model under the PCDD scenario to promote training remains unknown. **We appreciate the reviewer's advice and will include a discussion of this question in the submission for better clarity and future exploration.** --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and rebuttals to some of the other reviewers. The authors' rebuttal addresses well the points raised in my review, and the additions are recommended for inclusion in the final submission, if accepted. I do not have further questions for the authors. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the positive support of the reviewer. 
We will carefully follow your constructive comments and include the corresponding contents in the revision to improve the submission. Best, The Author of Submission7489
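As context for the Dirichlet split mentioned in the rebuttal above, the standard recipe samples, for each class, client proportions from Dirichlet($\beta$) and deals that class's samples out accordingly; smaller $\beta$ yields more skewed splits, approaching the partially class-disjoint regime. The sketch below uses toy sizes, and the function name is ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, n_clients, beta):
    """Assign sample indices to clients: per class, draw client proportions
    from Dirichlet(beta) and split that class's (shuffled) indices accordingly.
    Smaller beta -> more heterogeneous, closer-to-PCDD splits."""
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(np.full(n_clients, beta))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)  # toy 10-class, 1000-sample dataset
parts = dirichlet_partition(labels, n_clients=5, beta=0.1)
assert sum(len(p) for p in parts) == len(labels)  # every sample is assigned once
```

With $\beta=0.1$ many clients end up holding only a few classes, which is why this split both subsumes and dilutes the pure PCDD setting discussed in the rebuttal.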
Summary: This paper mainly focuses on the partially class-disjoint data (PCDD) problem in federated learning (FL) settings, which is a common yet challenging problem in distributed data sources. Inspired by a classifier structure (simplex equiangular tight frame, ETF), the authors of the paper propose FedGELA to tackle the PCDD problem. FedGELA is a variant of FedAvg with local model adaptation (personalization): They first define the classifier $W$, which is the ETF that the classifier should converge to. Here, they also take the client local data distribution ($\phi$) into account. Afterwards, the feature extractor $H$ will be optimized locally at each client and communicated via server-client communication. Finally, the global feature extractor and the $W$ at central server, as well as the local feature extractors and the adapted $W$ at clients will be returned. Strengths: The proposed method is motivated very well. The schematic illustration is also clear. The theoretical analysis is sound. Experiments and the results are good. Weaknesses: From my understanding, FedGELA focuses only on the alignment in the feature embedding space, which has been done by many previous works (FedGen [1], FedProto [2], …). Therefore this paper lacks significance to some extent. Also, the definition of $W$ (ETF, the global classifier), as well as the locally adapted ones looks straightforward. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. You claim that FedGELA could mitigate PCDD from both Global and Local view. But in Algorithm 1, the global model $H^T$ is simply an average of the client model and there is no server-side optimization. Could you please explain this? 2. In Table 1, you claim that FedGELA could mitigate the model skew, could you please explain this point in more details? Also, in terms of “save space”, since you are transmitting the whole feature backbone $H$, which is the resnet18 in your experiments, what is the reason of space saving? 3. 
Why do you use $H$ to represent 2 different terms, features (Line 114) and the global backbone (in the Algorithm)? It's a bit confusing while reading. 4. Could you briefly explain the converged angles in your figures? Is there a specific meaning to the values? E.g. in Figure 2. 5. In Equation 4, you model the client data skew in the label space via $\phi$, which is based on the number of samples from different classes. Have you experimented with other options? This looks a bit too straightforward to me. 6. FedGen [1] is a method which augments the feature embedding space using a shared feature generator, which could possibly mitigate the issue of "waste of space" by generating synthetic embeddings for minority classes. Could you provide a comparison with this work? [1] Zhu, Zhuangdi, Junyuan Hong, and Jiayu Zhou. "Data-free knowledge distillation for heterogeneous federated learning." ICML, 2021. [2] Tan, Yue, et al. "Fedproto: Federated prototype learning across heterogeneous clients." AAAI 2022. Minor comments: 1. Line 115, $E_W$ and $E_H$ should be introduced at their first appearance. 2. Line 298, the selection of $E_W$ seems to be dataset-specific; are there any suggested default values? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: FedGELA is only tested on client data with label skew. Is it also applicable to data with feature skew? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We really appreciate your constructive comments. Regarding the questions from the reviewer, we provide a detailed response below, and hope it can address your concerns. Let's use Q as shorthand for Question.** **Weakness and Q6:** 1) **Technical Innovation.** We would like to kindly argue that our core contributions lie in identifying the practical yet under-explored PCDD problem and proposing a **bilateral curation** method in principle to combat the challenges. Why we need to **curate the local classifier instead of keeping all classifiers the same** has not been studied under PCDD, and we give the contraction analysis in Eq. (2) and (3). Besides, such a non-parametric bilateral curation is not straightforward, as the form of the **local curation is not given in previous studies, and Eq. (4) must ensure that global and local convergence hold simultaneously** (see the reply to Q5). Other forms might not enjoy such a theoretical merit. 2) **Comparison to FedGen.** **FedGELA focuses on bilateral curation in the parameter space, instead of the embedding space.** Although the parameter space and the embedding space might follow the same spirit, in the embedding space, one needs to upload prototypes like FedProto or train a generator and generate features in each round like FedGen. By contrast, our bilateral ETFs reach consensus in advance and can be guaranteed in principle without the transmission of personal classifiers or extra burden on local training, as shown in Algorithm 1 on Page 5. Note that the calculation of $\Phi$ is negligible. Empirically, FedProto is included in the paper, and here we provide the comparison to FedGen on SVHN in the following table. **We will add FedGen as a baseline in the submission**. 
| | Partition | Metric | FedAvg | FedGen | FedGELA |
| --- | --- | --- | --- | --- | --- |
| Full Parti. (10 clients) | IID | PA | 93.01 | 94.02 | 94.84 |
| | | GA | 92.61 | 93.99 | 94.66 |
| | $\beta=0.5$ | PA | 93.95 | 94.47 | 96.27 |
| | | GA | 91.24 | 92.66 | 93.66 |
| | $\beta=0.1$ | PA | 98.10 | 98.22 | 98.52 |
| | | GA | 75.24 | 76.51 | 78.88 |
| Partial Parti. (50 clients) | IID | PA | 91.44 | 91.47 | 94.68 |
| | | GA | 91.29 | 91.33 | 93.59 |
| | $\beta=0.5$ | PA | 92.70 | 93.67 | 95.54 |
| | | GA | 89.29 | 91.35 | 93.29 |
| | $\beta=0.2$ | PA | 95.31 | 95.77 | 96.85 |
| | | GA | 84.70 | 87.59 | 89.58 |

**Q1:** We kindly point out that the global and local views mean that FedGELA can improve the performance of both the global model and the local models. It does not suggest that FedGELA has both client-side and server-side optimization. **Q2:** As described in the caption of Table 1, 1) model skew is the bias of the local model and the global model relative to their optimal weights. We mitigate this by the curation of the classification heads; 2) "save space" here means saving locally wasted feature space rather than the storage size of the model. We will emphasize these points in the caption to avoid misunderstanding. **Q3:** $H$ follows the notation in the Layer-Peeled Model [1] to better connect with previous theory about ETF. We specifically distinguish this notation for the global backbone in Line 122. We apologize that we did not highlight this point and will improve it to avoid confusion. **Q4:** In Figure 2, the angle converges to a proper value, meaning that the optimal separation structure is being approached. The value of the converged angle reflects the mean separation between different classes when the algorithm converges. **Q5:** The selection of $\Phi_k$ should reflect the personal class distribution and satisfy a basic rule for federated learning, wherein the aggregation of local classifiers aligns with the global classifier (Line 168 in the paper), thereby ensuring the validity of the theoretical analyses from both global and local views. 
The convergence requirement can be rewritten as $\gamma\sum_k p_kQ_k(\frac{n_{k,c}}{n_k})=1$, where $\gamma$ is a scaling constant and $Q_k(\frac{n_{k,c}}{n_k})$ denotes a potential way to select $\Phi$. It is highly preferable for the selection process to avoid transmitting $Q_k(\frac{n_{k,c}}{n_k})$, which might induce additional privacy risks. There are indeed many ways to select $\Phi_k$. **However, setting $Q_k(\frac{n_{k,c}}{n_k})=\frac{n_{k,c}}{n_k}$ and $\gamma=\frac{1}{C}$ is the only potential way we have found to determine $\gamma$ that satisfies both the convergence and privacy requirements.** Besides, we have also considered alternative choices, such as an exponential or power function of the number of samples, which need to share $Q_k(\frac{n_{k,c}}{n_k})$ but achieve similar performance. The related experiments are shown in the following table. **We will discuss this further and highlight our design intuition in the submission.**

| Partition | Metric | $Q_k(x)=e^{x}$ | $Q_k(x)=x^{\frac{1}{2}}$ | $Q_k(x)=x$ (ours) |
| --- | --- | --- | --- | --- |
| IID | PA | 95.12 | 95.43 | 94.84 |
| | GA | 94.32 | 93.99 | 94.66 |
| $\beta=0.5$ | PA | 96.18 | 95.56 | 96.27 |
| | GA | 93.28 | 93.22 | 93.66 |
| $\beta=0.1$ | PA | 98.33 | 98.21 | 98.52 |
| | GA | 78.95 | 77.18 | 78.88 |

**Q7:** We will explain notations when they first appear. **Q8:** We recommend a default value of 10e3. With a larger $E_w$ (even 10e7), FedGELA still performs better than most of the methods and far better than FedAvg. **Limitation:** We conducted additional experiments on the PACS dataset [2], which is commonly used for analyzing feature heterogeneity. Fed-ISIC2019, used in our paper, also includes feature shifts as the images are collected by different hospitals under different conditions. As shown below, FedGELA remains applicable and achieves commendable performance even under feature heterogeneity. Our speculation is that the local classifiers trained on distinct feature domains may exhibit bias. 
Using the optimal separation structure, as FedGELA does, aids in enhancing performance.

| Dataset | Split | FedAvg | Best Baseline | FedGELA |
| --- | --- | --- | --- | --- |
| PACS | PA | 97.60 | 98.99 | 98.65 |
| | GA | 82.30 | 84.42 | 85.06 |
| Fed-ISIC2019 | PA | 77.27 | 78.91 | 79.27 |
| | GA | 73.59 | 74.98 | 75.85 |

[1] Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. (NeurIPS 2021)
[2] Deeper, broader and artier domain generalization. (ICCV 2017)

---

Rebuttal Comment 1.1: Title: Invitation to rolling discussion on possible remaining concerns Comment: Dear Reviewer, We have thoroughly considered your comments and provided a detailed response to address your concerns about technical novelty, clarification, more comparison and more verification. We would like to ask whether you have remaining or additional concerns, so that we can try our best to **answer you in a timely manner** during this reviewer-author discussion phase, instead of giving incomplete demonstrations when approaching the deadline of this phase. Best, The Authors of Submission7489

---

Reply to Comment 1.1.1: Comment: Dear Reviewer SUwJ: We appreciate your questions and suggestions, which help us improve the submission. As your rating score is negative, we would like to know whether our detailed responses have addressed your concerns. If not, we would like to have a further discussion of and explanation for your remaining questions. As the deadline is approaching and we have not received your response, we would really appreciate it if you could give feedback on this submission and promote the discussion. Thank you very much. Best, The Authors of Submission7489

---

Rebuttal Comment 1.2: Comment: The rebuttal from the authors addresses the concerns in my review, and the additional experiments also indicate the effectiveness of their proposed method.

---

Reply to Comment 1.2.1: Comment: Thank you for your positive support. We will carefully follow your comments to improve the submission in the revision.
Best, The authors of Submission7489 --- Rebuttal 2: Title: concerns are addressed Comment: The rebuttal from the authors addresses the concerns in my review, and the additional experiments also indicate the effectiveness of their proposed method. --- Rebuttal Comment 2.1: Comment: Thank you very much for your confirmation. If the concerns have been addressed, would you like to raise the rating score? We will carefully follow your comments and include all the experiments and discussions in the revision. Best, The authors of Submission7489
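The $\Phi_k$ selection rule from Q5/Q7 above can be sketched numerically. The client counts below are illustrative inventions, and the reading of the constraint (aggregated over classes, with $p_k$ as the client sample fraction) is our assumption rather than the paper's definition; the sketch only shows how $\gamma$ acts as the scaling constant that enforces $\gamma\sum_k p_k Q_k(n_{k,c}/n_k)=1$ for the three candidate choices of $Q_k$ compared in the ablation table:

```python
import numpy as np

# Hypothetical per-client class counts (rows: clients, cols: classes);
# these numbers are illustrative only, not from the paper.
counts = np.array([[30, 20, 0],    # client 1 holds no class-3 samples (PCDD)
                   [0, 25, 25]])   # client 2 holds no class-1 samples

n_k = counts.sum(axis=1)           # samples per client
p_k = n_k / n_k.sum()              # client aggregation weights

# The three candidate Q_k from the ablation: identity, square root, exponential.
for Q in (lambda x: x, np.sqrt, np.exp):
    # sum over clients k of p_k * Q(n_{k,c} / n_k), per class c
    s = (p_k[:, None] * Q(counts / n_k[:, None])).sum(axis=0)
    gamma = 1.0 / s.sum()          # scaling constant enforcing the constraint
    assert np.isclose(gamma * s.sum(), 1.0)
```

The identity choice $Q_k(x)=x$ is the one that lets each client compute its term from local label frequencies alone, which matches the rebuttal's privacy argument for not transmitting $Q_k(n_{k,c}/n_k)$.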
Summary: This paper introduces a novel federated learning algorithm to address the partially class-disjoint data (PCDD) problem. The approach is based on the simplex equiangular tight frame (ETF) phenomenon to solve the angle-collapse issue and introduces a second projection to personalize an adapted structure to save space. The main contributions can be summarized in three aspects: identifying the angle-collapse and space-waste challenges in the PCDD problem, introducing the novel FedGELA algorithm, and conducting a range of experiments to evaluate its performance. The paper also includes a theoretical convergence analysis. Strengths: 1. The paper is well-written with a proper structure and clear explanations. The presentation of the authors' ideas is easy to follow due to the effective use of figures and notations. 2. The methodology of the FedGELA algorithm is interesting, and the mathematical deductions are sufficient. The algorithm is clear and provides enough information for reproducibility. 3. The algorithm has been thoroughly evaluated experimentally, and the plots are suitable and clear. Weaknesses: 1. The authors claim that "none of the existing methods can intrinsically mitigate PCDD challenges to achieve holistic improvement in the bilateral views of federated learning." However, the PCDD problem seems closely related to the general non-IID (non-independent and identically distributed) problem. The main differences between these two problems have not been explained. 2. Based on my understanding, if PCDD is different from the non-IID problem, it should perhaps be related to the multi-label problem. However, the presentation of the paper, the experimental data, and the methods of experimental comparison all tend to be more inclined towards non-IID problems. Non-IID is a common problem setting, which contradicts the authors' first claimed contribution. 3. The performance improvement is limited. 4.
I disagree with the statement that "restricting local structure will waste feature space and limit the training of the local model on existing classes." I believe the notion of "waste of space" is unfounded as it appears to have no impact on computational efficiency or performance improvement. Conclusion: The methodology and algorithm presented in this paper are interesting, and the paper is written in high quality. However, there seems to be an important flaw in the problem setting. PCDD appears not to be a new issue but rather a non-iid problem under some special conditions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We really appreciate your positive support and the constructive comments. Regarding the weaknesses mentioned by the reviewer, we provide a detailed response below, and hope that it can address your concerns. Let W denote the shorthand of Weaknesses.**

**Reply to W1 and W2:** 1) We would like to explain a bit about the PCDD problem. **It actually belongs to the data heterogeneity case, but has a very distinctive characteristic compared with the ordinary heterogeneity problem.** That is, if each client only has a subset of classes, it does not share the optimal Bayes classifier with the global model that considers all classes on the server side. In ordinary heterogeneity, where local clients have all classes of samples but only differ in the class distributions, they do share an optimal Bayes classifier with the global model on the server side. 2) Regarding the claim of the first contribution, the contextual description (Line 63) is for the PCDD problem (Line 64). We did not mean to imply that Non-IID in general is under-explored. **In the revision, we will carefully consider the reviewer's question and add more explanation of the relationship between PCDD and data heterogeneity (i.e., multiple Non-IID distributions) for clarity.** We will refine this description to avoid misunderstanding.

**Reply to W3:** We would like to kindly argue for the effectiveness of our method with the following three points. 1) We must note that the reported average improvement of 1.5% over the best baseline encompasses all settings and all datasets, including both Non-IID and IID scenarios. Actually, there is only marginal room for any algorithm to improve on FedAvg in IID situations, while in Non-IID situations our method has a larger improvement compared to existing approaches, particularly with a 7% and 11.39% generic improvement over FedAvg on CIFAR10, and a 2.12% and 2.58% generic improvement over the best baseline on CIFAR100.
These results demonstrate the effectiveness of FedGELA. 2) For the consideration of practical data distributions and cohesion with most previous works, we use the Dirichlet distribution to split the dataset, which generates data heterogeneity beyond PCDD and might limit the potential improvement. **To further address the reviewer's concern, we decouple the PCDD setting and the ordinary data heterogeneity (Non-PCDD), and conduct the corresponding experiments on pure PCDD settings.** In the following table, we use PxCy to denote that the dataset is divided into x clients with y classes of samples, and in each round, 10 clients are selected into federated training. The training round is 100. According to the experimental results, we can see that our **FedGELA achieves significant improvement, especially 18.56% over FedAvg and 10.04% over the best baseline on CIFAR10 (P50C2)**.

| Dataset (Split) | Metric | FedAvg | Best Baseline | FedGELA |
| --- | --- | --- | --- | --- |
| CIFAR10 (P10C2) | PA | 92.08 **+3.76** | 94.07 **+1.77** | **95.84** |
| | GA | 47.26 **+12.34** | 52.02 **+7.58** | **59.60** |
| CIFAR10 (P50C2) | PA | 91.74 **+3.68** | 93.22 **+2.20** | **95.42** |
| | GA | 36.22 **+18.56** | 44.74 **+10.04** | **54.78** |
| SVHN (P10C2) | PA | 95.64 **+3.11** | 97.02 **+1.73** | **98.75** |
| | GA | 69.34 **+14.22** | 76.06 **+7.50** | **83.56** |
| SVHN (P50C2) | PA | 94.87 **+3.50** | 96.88 **+1.49** | **98.37** |
| | GA | 66.94 **+10.24** | 72.97 **+4.21** | **77.18** |

3) Additionally, our algorithm is **easy to reproduce, requiring almost zero extra burden** in terms of local storage, local computation, and communication costs compared to FedAvg. Unlike other methods, our approach **does not require fine-tuning for personalization**. This makes it more accessible and practical for implementation in real-world scenarios.

**Reply to W4:** Probably our wording "restricting local structure" confused the reviewer.
Actually, this sentence discusses that if we force the local classifier structure to match the global one, it greatly limits the performance of personalization, since under PCDD the global model (with the support of full classes) and the local models (with the support of a subset of classes) do not intrinsically share an optimal Bayes classifier (please refer to the explanation for W1 & W2). The recent work FedRod [1], which configures different classifiers on the server and client sides, also supports this point. Regarding "waste of space", it is a further possible explanation of why aligning the structures under PCDD induces performance degeneration in terms of the PA metric (for personalization). An intuitive illustration is shown in Figure 2. Empirically, as shown in Table 4, aligning the local classifier structure with the global structure, namely ETF (we denote this method FedGE), cannot achieve the best personal performance. Especially on the real-world PCDD dataset Fed-ISIC2019, the restriction greatly limits the personal performance. Here we show part of the results of Table 4:

| | GE | LA | CIFAR100 | | | | Fed-ISIC2019 | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #Partition | | | Full Parti. | | Partial Parti. | | Real World | |
| #Metric | | | PA | GA | PA | GA | PA | GA |
| FedAvg | - | - | 69.09 | 62.80 | 56.46 | 54.28 | 77.27 | 73.59 |
| FedGE | $\checkmark$ | - | 71.46 | 66.02 | 62.67 | 58.98 | 69.88 | 75.54 |
| FedGELA | $\checkmark$ | $\checkmark$ | 74.23 | 66.05 | 66.33 | 58.81 | 79.27 | 75.85 |

**We will refine this sentence to clarify our meaning and provide detailed explanations of our statement.**

[1] On bridging generic and personalized federated learning for image classification

---

Rebuttal Comment 1.1: Comment: Dear Reviewer UNoN: As you do have a few arguments about some points in our submission, we would like to kindly ask whether our explanations address your concerns.
If not, we would like to have a further discussion with you. We appreciate the reviewer's challenges on some points, which help us improve the submission, and we welcome further discussion. Thank you very much. Best, The Authors of Submission7489
Summary: The authors study the problem of federated learning over partially class-disjoint data and propose using equiangular tight frame (ETF) techniques that allow achieving better performance in both the global and personal learning tasks. They show that the existing federated learning approaches suffer either from angle collapse for locally missing classes or from waste of space for locally existing classes, and propose their approach FedGELA which solves both issues. Strengths: + Extensive experiments comparing the proposed approach, FedGELA, with the existing federated learning approaches. + Detailed theoretical analysis of the proposed approach. + Highlighting the issues of angle collapse and waste of space in federated learning with partially class-disjoint data. Weaknesses: - Borrows the existing ETF techniques and hence the novelty seems to be limited. - Overall improvement in average accuracy is marginal (~1.5%) over the existing approaches. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I like the experimental evaluations and thorough comparison with the prior federated learning works, but I'm concerned about the algorithmic novelty of the proposed approach. Most of the techniques seem to be adapted from the prior known literature on ETF. Can the authors highlight the technical difficulties in directly applying the prior techniques in solving the PCDD problem? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I don't think there are any negative societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thanks for your positive support and the constructive comments. Regarding the questions and weaknesses mentioned by the reviewer, we provide a point-by-point response below, and hope that it can address your concerns. Let Q, W and A denote the shorthand of Question, Weakness and Answer respectively.**

> **W1 and Q1:** Borrows the existing ETF techniques and hence the novelty seems to be limited. I like the experimental evaluations and thorough comparison with the prior federated learning works, but I'm concerned about the algorithmic novelty of the proposed approach. Most of the techniques seem to be adapted from the prior known literature on ETF. Can the authors highlight the technical difficulties in directly applying the prior techniques in solving the PCDD problem?

**A:** The main problem in addressing the PCDD dilemma with ETF lies in the inability to remedy the loss on the clients. Although ETF can guarantee the global optimum on the server side, from the local view only a subset of classes is present, and the clients' optimal Bayes classifier is no longer shared with the server, where all classes are considered. Note that this is the key distinction of PCDD from ordinary heterogeneity (Non-PCDD), where all classes appear but only differ in the distribution shift. How to deal with this problem is the biggest technical challenge beyond directly using ETF, and it is the reason why we propose the **Bilateral Curation** in principle. To further show this distinction, we summarize the comparison between our FedGELA and directly using ETF (termed FedGE for simplicity) in the following table.

| | GE | LA | CIFAR100 | | | | Fed-ISIC2019 | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #Partition | | | Full Parti. | | Partial Parti. | | Real World | |
| #Metric | | | PA | GA | PA | GA | PA | GA |
| FedAvg | - | - | 69.09 | 62.80 | 56.46 | 54.28 | 77.27 | 73.59 |
| FedGE | $\checkmark$ | - | 71.46 | 66.02 | 62.67 | 58.98 | 69.88 | 75.54 |
| FedGELA | $\checkmark$ | $\checkmark$ | 74.23 | 66.05 | 66.33 | 58.81 | 79.27 | 75.85 |

As can be seen, FedGELA consistently maintains the advantage in the metric of PA (personalization performance for local clients), while FedGE is even significantly worse than FedAvg on the real-world dataset Fed-ISIC2019 in the metric of PA. Another noticeable point is that we are the first to employ ETF techniques in FL, and before us the corresponding convergence analysis was unknown. We provide both local and global convergence guarantees for naively combining ETF with FedAvg (FedGE) and for our FedGELA. **We will follow the reviewer's advice to further highlight the technical challenge and add these discussions in the submission.**

> **W2:** Overall improvement in average accuracy is marginal (~1.5%) over the existing approaches.

**A:** 1) We must note that the **reported average improvement of 1.5% encompasses all settings, including both IID and Non-IID scenarios**. Actually, there is only marginal room for any algorithm to improve on FedAvg in IID situations, while in Non-IID situations our method has a larger improvement compared to existing approaches, particularly with a 7% and 11.39% generic improvement over FedAvg on CIFAR10, and a 2.12% and 2.58% generic improvement over the best baseline on CIFAR100. These results demonstrate the effectiveness of FedGELA. **To further address the reviewer's concern, we decouple the PCDD setting and the ordinary data heterogeneity (Non-PCDD), and conduct the corresponding experiments on pure PCDD settings.** In the following table, we use PxCy to denote that the dataset is divided into x clients with y classes of samples, and in each round, 10 clients are selected into federated training. The training round is 100.
According to the experimental results, we can see that our **FedGELA achieves significant improvement, especially 18.56% over FedAvg and 10.04% over the best baseline on CIFAR10 (P50C2).**

| Dataset (Split) | Metric | FedAvg | Best Baseline | FedGELA |
| --- | --- | --- | --- | --- |
| CIFAR10 (P10C2) | PA | 92.08 **+3.76** | 94.07 **+1.77** | **95.84** |
| | GA | 47.26 **+12.34** | 52.02 **+7.58** | **59.60** |
| CIFAR10 (P50C2) | PA | 91.74 **+3.68** | 93.22 **+2.20** | **95.42** |
| | GA | 36.22 **+18.56** | 44.74 **+10.04** | **54.78** |
| SVHN (P10C2) | PA | 95.64 **+3.11** | 97.02 **+1.73** | **98.75** |
| | GA | 69.34 **+14.22** | 76.06 **+7.50** | **83.56** |
| SVHN (P50C2) | PA | 94.87 **+3.50** | 96.88 **+1.49** | **98.37** |
| | GA | 66.94 **+10.24** | 72.97 **+4.21** | **77.18** |

2) Additionally, our algorithm is **easy to reproduce, requiring almost zero extra burden** in terms of local storage, local computation, and communication costs compared to FedAvg. Unlike other methods, our approach **does not require fine-tuning for personalization**. This makes it more accessible and practical for implementation in real-world scenarios.

---

Rebuttal Comment 1.1: Comment: Thank you for providing clarifications. I have no further questions for the authors.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your confirmation. We will carefully include the contents regarding your suggestion in the revision. Best, The Authors of Submission7489
Rebuttal 1: Rebuttal: We would like to thank all the reviewers (nvCV, UNoN, SUwJ and J5UX) for their thoughtful suggestions on our paper, and appreciate that the reviewers have multiple positive impressions of our work, including:
- **a well-defined problem (J5UX) and a clear motivation (SUwJ and J5UX)**
- **a novel and interesting algorithm (UNoN)**
- **solid theoretical justification (nvCV and UNoN) and sufficient mathematical deductions (SUwJ)**
- **extensive and reasonable experiments (nvCV, UNoN and J5UX) with good results (SUwJ)**
- **a well-written paper with a proper structure (UNoN) and clear illustrations (SUwJ).**

We provide a summary of our responses, and we will add all corresponding discussions, reviewer-recommended related work and experimental results into the manuscript. For detailed responses, please refer to the feedback on each comment/question point by point.

**Introduction and Related Works:**
- We clarify some statements such as "global and local view", "model skew" and "save space". (for the question of the Reviewer J5UX)
- We explain the relationship between PCDD and Non-IID and the difference between PCDD and traditional label heterogeneity in federated learning. (for the question of the Reviewer UNoN)
- We further analyze the statements "waste of space" and "converged angles". (for the questions of the Reviewers UNoN and SUwJ respectively)

**Method:**
- We highlight and clarify the technical difficulties in applying the simplex ETF when solving PCDD challenges. (for the question of the Reviewer nvCV)
- We further analyze the choice of the personal distribution matrix. (for the question of the Reviewer SUwJ)
- We highlight our technical innovation and provide the comparison to FedGen. (for the question of the Reviewer SUwJ)
- We add more descriptions of some notations. (for the question of the Reviewer SUwJ)

**Experiments:**
- We add FedGen to the baselines.
(for the question of the Reviewer SUwJ)
- We decouple the PCDD settings and the ordinary (Non-PCDD) data heterogeneity and verify our algorithm under pure PCDD settings to further show its effectiveness. (for the question of the Reviewers nvCV, UNoN and J5UX)
- We verify our algorithm on two more real-world PCDD federated challenges. (for the question of the Reviewer J5UX)
- We provide suggestions for choosing the hyper-parameter. (for the question of the Reviewer SUwJ)
- We test our method under feature heterogeneity. (for the question of the Reviewer SUwJ)

**We appreciate all reviewers' time and effort again. We will add all corresponding discussions, reviewer-recommended related work and experimental results into the manuscript. We are looking forward to your reply!**
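For readers outside the neural-collapse literature, the simplex ETF structure at the core of the FedGE/FedGELA discussion above can be illustrated with the standard construction (our sketch, not the authors' code). It builds $C$ unit-norm class vectors in $d$ dimensions and checks that every pair has cosine $-1/(C-1)$, the maximal equiangular separation referred to as the converged angle in the rebuttals:

```python
import numpy as np

# Minimal sketch of a C-class simplex ETF in d dimensions (d >= C).
C, d = 5, 16
rng = np.random.default_rng(0)

# U: d x C matrix with orthonormal columns, then
# M = sqrt(C/(C-1)) * U @ (I - 11^T / C)  (standard simplex-ETF formula).
U, _ = np.linalg.qr(rng.standard_normal((d, C)))
M = np.sqrt(C / (C - 1)) * U @ (np.eye(C) - np.ones((C, C)) / C)

G = M.T @ M                                    # Gram matrix of class vectors
off_diag = G[~np.eye(C, dtype=bool)]
assert np.allclose(np.diag(G), 1.0)            # unit-norm class vectors
assert np.allclose(off_diag, -1.0 / (C - 1))   # equal pairwise cosine -1/(C-1)
```

Under PCDD, a client holding only a subset of classes inherits angles fixed by the full $C$-class structure, which is the tension the rebuttals describe FedGELA's bilateral curation as resolving.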
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Adapting Fairness Interventions to Missing Values
Accept (poster)
Summary: This paper presents an information-theoretic finding that reveals the fundamental limitation of impute-then-classify approaches when considering fairness-accuracy tradeoffs. Additionally, it introduces three techniques for addressing missing features within the framework of linear fair classifiers, as well as an ensemble method for non-linear counterparts. One notable aspect of these developments is their ability to capture missing pattern information, which is overlooked by impute-then-classify algorithms. Furthermore, the paper presents experimental results that demonstrate the superior tradeoff performances of the proposed methods, particularly when dealing with datasets that exhibit prominent missing patterns. Strengths: S1. The paper focuses on a significant issue that arises in numerous applications. S2. By utilizing the concept of mutual information, the paper reveals the fundamental limitation of impute-then-classify methods. S3. The proposed methods effectively harness the information embedded within missing patterns. Weaknesses: W1. I believe that scenarios where sensitive attributes are missing present more practical relevance, importance, and challenges compared to scenarios where features are missing. Although the authors mention the possibility of extending their findings to such settings in the conclusion section, the specific details of this extension remain unclear as the computation of fairness constraints relies on knowledge of sensitive attributes. W2. The main inspiration for this paper appears to be derived from [3]. While the main contribution of this paper lies in its adaptation to the fairness context, it does not take into account the scenario where sensitive attributes are missing. Exploring this more challenging setting may open up the opportunity for a distinct idea to be explored. W3. The paper introduces several methods for linear and non-linear settings, suggesting that the choice should depend on the data distribution. 
However, it does not provide concrete guidelines as to how to make the choice. W4. Theorem 1 looks interesting, but the main body of the paper lacks technical discussion, not even a proof sketch. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In line with W1, can you provide an in-depth discussion on the extension to the sensitive-attribute-missing scenario? In light of W4: any intuition why the fundamental degradation of accuracy is expressed in terms of mutual information? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see Weaknesses in the above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful read of our paper and constructive comments! --- **W1. I believe that scenarios where sensitive attributes are missing present more practical relevance, importance, and challenges compared to scenarios where features are missing. Although the authors mention the possibility of extending their findings to such settings in the conclusion section, the specific details of this extension remain unclear as the computation of fairness constraints relies on knowledge of sensitive attributes.** A1. We completely agree that the problem of missing group (sensitive) attributes is of significant practical importance. Indeed, there is a growing body of research (Kallus et al. 2022, Zhang and Long 2021) in fair ML dedicated to addressing this issue. However, we also emphasize that the problem of missing input features is equally crucial and widespread, yet less studied. For instance, in the HSLS dataset used in our experiment, 35.5% of White and Asian students did not report their secondary caregiver’s highest level of education; this proportion increases to 51.0% for underrepresented minority students. Missingness patterns can vary significantly across population groups and hinder fairness-accuracy performance if not adequately accounted for. Unfortunately, the matter of missing input features has not received adequate attention in fair ML literature, with the majority of interventions assuming complete input features. The primary objective of this paper is to bridge this gap and offer a comprehensive study encompassing theory and algorithms for training fair classifiers in the presence of missing input feature values, particularly when missingness patterns vary per population group (as observed in practice). We aim to shed light on this overlooked aspect and propose solutions to challenges arising from disparate missingness patterns. 
To address your concerns, we will revise our abstract and introduction to provide a clearer presentation of our setup and discuss the issue of missing group attributes by doing a more thorough job acknowledging this important line of work. References: Kallus, N., Mao, X., & Zhou, A. (2022). Assessing algorithmic fairness with unobserved protected class using data combination. Zhang, Y., & Long, Q. (2021). Assessing fairness in the presence of missing data. --- **W2. The main inspiration for this paper appears to be derived from [3]. While the main contribution of this paper lies in its adaptation to the fairness context, it does not take into account the scenario where sensitive attributes are missing.** A2. Please refer to our response to W1. We would like to emphasize the differences between our study and [3]. Not only do we adapt their algorithms for the fairness context, but we also develop a *new* universal adaptive algorithm outlined in Section 5 which allows for the adaptation of *any* group-fairness intervention to account for missing values. In contrast, [3] focuses on linear models only. In Section 3, we also derive a theorem that articulates the fairness risks associated with training on imputed data. Furthermore, our work involves extensive numerical experiments using state-of-the-art fairness interventions across multiple benchmark datasets (see Appendix E). --- **W3. The paper introduces several methods for linear and non-linear settings, suggesting that the choice should depend on the data distribution. However, it does not provide concrete guidelines as to how to make the choice.** A3. We thank the reviewer for raising this point! Please refer to our response to Reviewer oma2’s Q1 and Q2. --- **W4. Theorem 1 looks interesting, but the main body of the paper lacks technical discussion, not even including proof sketch.** A4. We are glad to learn that you find Theorem 1 interesting! 
This theorem was established by creating a specific data generating distribution and identifying the optimal classifiers for both data with missing values and imputed data. We intend to add a sketch of the proof in order to provide further clarity and enhance understanding of the theorem's underlying rationale. --- **Q1. In line of W1, can you provide in-depth discussion on the extension of the sensitive-attribute-missing scenario?** A5. Absolutely – please refer to section (a) of our answer to Q1 in the global response. --- **Q2. In light of W4: any intuition why the fundamental degradation of accuracy is expressed in terms of mutual information?** A6. The underlying rationale for Theorem 1 is that as missing patterns provide more information about the predictive label, the fairness-accuracy performance of impute-then-classify deteriorates. This occurs as imputing the data loses information about the label that was contributed by the missing patterns. We use mutual information to measure the extent of this information, since mutual information is a standard metric for assessing the dependence between two random variables. It also enables us to apply Fano’s inequality to relate the information loss from imputation to classification error probability (see Appendix B).
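To make the Fano step in A6 explicit (our rendering of the standard inequality, not a quote of the paper's Appendix B): for any estimator $\hat{Y}$ of a label $Y$ taking values in a finite set $\mathcal{Y}$, with error probability $P_e = \Pr[\hat{Y} \neq Y]$,

```latex
H(P_e) + P_e \log\bigl(|\mathcal{Y}| - 1\bigr) \;\ge\; H(Y \mid \hat{Y}) \;=\; H(Y) - I(Y; \hat{Y}).
```

Since imputation discards the contribution of the missingness pattern to $I(Y;\hat{Y})$, the right-hand side grows and the achievable error probability $P_e$ is pushed up, matching A6's claim that the degradation scales with the mutual information between the missing pattern and the label.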
Summary: This paper investigates the impact of missing values on algorithmic fairness and highlights the limitations of the commonly used "impute-then-classify" approach. The authors propose algorithms that preserve the information encoded within missing patterns, leading to improved fairness and accuracy. Strengths: 1. The paper theoretically shows that for the fairness measure of equalized odds, impute-then-classify can significantly reduce the performance. Furthermore, it is also shown that the reduction in the performance grows with the mutual information between the missing pattern and the labels. 2. The paper proposes 3 methods to handle missing values for linear fair classifiers by encoding the missing value patterns. These methods are interpretable and can be combined with any preexisting fairness intervention method including in-processing and post-processing methods. 3. The paper extensively evaluates the proposed method on synthetic data as well as real data. The authors also show the superiority of the proposed methods for the linear setting under the MNAR missing pattern. Overall, the paper is mostly clear and has original ideas. Weaknesses: 1. The proposed method is only applicable to fair classification and when the group attributes are discrete. Furthermore, the approach allows missingness only in the non-group attribute input features, i.e., the method requires the group attribute and the labels to be fully observed. It might be useful to extend the method for fair regression and for missingness in group attributes and labels. 2. The results in the paper focus only on a single measure of fairness, i.e., equalized odds. (By the way, the MEO abbreviation in the figures in the main body should be expanded in the caption when first used.) It might be useful to extend the method for other notions of fairness and provide analogous empirical evaluation. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1.
What is the definition of the function h in equation (1)? Also, Disc(h) should be Disc(h(X)) in equation (1). 2. Can the authors elaborate on how their framework can be extended to multiaccuracy and multicalibration notions of fairness? Are there any empirical evaluations for these settings that the authors have performed? 3. It would help to show empirically how the proposed method performs with respect to the approach of Jeong et al. and also state the corresponding runtimes. Also, what does MIA in line 96 stand for? 4. How are the B datasets created in lines 249-250? Line 248 indicates there is only one combined dataset. 5. The algorithm in Section 5 seems closely connected to the idea of using bootstrapped subsamples proposed in 'Group Fairness with Uncertainty in Sensitive Attributes'. Could the authors clarify the similarities and differences? It might be useful to talk about this in related work. 6. For the HSLS dataset, why do the authors only consider datapoints where race and 9th grade math test score are present? 7. The focus of the paper is on algorithm design for the case of missing non-group attribute input features. I would advise the authors to state this right in the abstract and also talk about it in the introduction. A lot of literature on algorithmic fairness focuses on missing group attribute input features and it is easy to inherently assume that this paper does the same until one reaches line 107. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors talk about the limitations of their work in Section 7. 
While they briefly talk about potential negative societal impact of data imputation in Section 7 too, I encourage the authors to talk about the potential negative impact of their methodology too. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and for appreciating the merits of the work! **W1. The proposed method is only applicable to fair classification and when the group attributes are discrete. Furthermore, the approach allows missingness only in the non-group attribute input features, i.e., the method requires the group attribute and the labels to be fully observed. It might be useful to extend the method for fair regression and for missingness in group attribute and labels.** A1. This is a great point! Please refer to section (a) in our answer to Q1 in the global response. --- **W2. The results in the paper focus only on a single measure of fairness, i.e., equalized odds. (By the way, the MEO abbreviation in the figures in the main body should be expanded in the caption when first used.) It might be useful to extend the method for other notions of fairness and provide analogous empirical evaluation.** A2. Thank you for the point of clarification. We note that while the empirical evaluations with FairProjection use equalized odds, we use equality of opportunity (FNR difference) for the results with DispMistreatment. These are primary fairness measures supported by the FairProjection and DispMistreatment fairness interventions. Additionally, while the fairness guarantee for the ensemble algorithm in Section 5 is presented for equalized odds, the guarantee also holds for other fairness measures such as statistical parity and equality of opportunity. We will update the caption with the full name of MEO in the revised paper. --- **Q1. Clarification of equation (1).** A3. The function $h: \mathcal{X}\to \mathcal{Y}$ represents a classifier, predicting the label $y$ based on input features $x$. We will make this clear in the revised paper and replace $Disc(h)$ with $Disc(h(X))$ as you suggested. --- **Q2. Extending framework to multiaccuracy and multicalibration notions of fairness.** A4. 
Thanks for pointing this out – please refer to section (b) in our answer to Q1 in the global response where we address this point. --- **Q3. Comparing to Jeong et al; definition of MIA** A5. This is a great point. We encountered some challenges when attempting to compare our proposed methods to the FairMIPForest algorithm in Jeong et al., in that 1) the code for FairMIPForest does not support MEO and 2) DispMistreatment is a linear fairness intervention, hindering a sound comparison between FairMIPForest and either DispMistreatment or FairProjection. MIA stands for the “missingness incorporated in attribute” approach for training decision trees with missing values (see Twala et al. 2008). We will provide the full name in the revised paper. References: Twala, B. E., Jones, M. C., & Hand, D. J. (2008). Good methods for coping with missing data in decision trees. --- **Q4. How are the B datasets created in lines 249-250? Line 248 indicates there is only one combined dataset.** A6. Lines 247-249 describe the procedure to draw a single resampled dataset from the original dataset. The B datasets are created by repeating this procedure B times. We will clarify this in the revised paper. --- **Q5. Comparison to Bootstrap-S algorithm in Shah et al.** A7. Thank you for bringing this work to our attention. Our algorithm in Section 5 and the Bootstrap-S algorithm presented in Shah et al. are indeed similar in using bootstrapping to satisfy strengthened fairness conditions. The key difference is that our algorithm explicitly uses (known) sensitive attributes and labels when drawing subsamples to ensure sample diversity. We will include this comparison in the related work section in the updated paper. --- **Q6. For the HSLS dataset, why do the authors only consider datapoints where race and 9th grade math test score are present?** A8. This is a great question. For HSLS, we used race as the group attribute and 9th grade math test score as the label. 
As mentioned above, the fairness interventions with which our proposed methods are used may require knowledge of the group attribute, and our empirical results used interventions that use knowledge of the group attribute and label to calculate fairness metrics. --- **Q7. Clarifying the focus of the paper in the abstract and introduction.** A9. Thank you for highlighting this issue. We will ensure that both the abstract and introduction clearly state the context being considered in this paper. --- **Q8. Elaborating on potential negative impact of methodology in Section 7.** A10. We thank the reviewer for raising this important point. We acknowledge that using missingness information in a fair classifier may have potential negative impacts. For example, an individual may be incentivized to purposefully hide data if their true values are less favorable than even \texttt{NA}. In missing pattern clustering, an individual may be classified less favorably or accurately than they would be by a classifier from a different cluster. These scenarios highlight 1) important considerations with respect to individual and preference-based fairness notions (cf. Ustun et al. 2019), and 2) the importance of carefully weighing the advantages and disadvantages of each of our proposed methods prior to use. We will elaborate on these issues in the revised paper. Reference: Ustun, B., Liu, Y., & Parkes, D. (2019). Fairness without harm: Decoupled classifiers with preference guarantees. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I've also gone through the reviews from other reviewers. Here are a few comments regarding your response: W1. I recognize the distinction between challenges arising from missing input features versus missing sensitive attributes. It might be beneficial for the authors to acknowledge this limitation explicitly, emphasizing that the framework considers missingness in features but not in labels or sensitive attributes. 
Also, there was no response addressing the fact that the method is limited to classification tasks (not regression) and discrete sensitive attributes (not continuous ones). W2. It would be useful if the use of FNR difference is highlighted somewhere in the paper. Clearly articulating the proposed method's applicability would be valuable. Q3. Could you provide further insight into why the linearity of the fairness intervention "DispMistreatment" hampers the comparison? Additionally, I'm curious about the compatibility of the FairMIPForest framework with other fairness interventions like Reduction, EqOdds, ROC, or Leveraging. Does the FairMIPForest framework support the assessment of FNR difference? Q6. Please mention the choice of variables as sensitive attributes and labels in the appropriate sections. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: Thank you so much for your response and follow-up questions! --- **W1. I recognize the distinction between challenges arising from missing input features versus missing sensitive attributes. It might be beneficial for the authors to acknowledge this limitation explicitly, emphasizing that the framework considers missingness in features but not in labels or sensitive attributes. Also, there was no response addressing the fact that the method is limited to classification tasks (not regression) and discrete sensitive attributes (not continuous ones).** A1. Yes – we will clarify the scope of our framework in the updated paper and clearly state the limitation, as well as point to the references on handling missing sensitive attributes. Regarding fair regression, since the methods in Section 4 involve the input features only, we can apply them to fair regressors in an identical fashion as described for fair classifiers. 
Similarly, for continuous sensitive attributes, the methods can be applied in the same way provided the underlying fairness intervention is designed to handle continuous sensitive attributes. In general, the methods in section 4 depend on sensitive attributes and/or the target variable only to the extent of the adapted fairness intervention. Adapting the method in Section 5 to non-discrete labels (as is the case for fair regression) and/or continuous sensitive attributes is more complex because the resampling process uses the discrete nature of the sensitive attribute and label to preserve the joint distribution of the sensitive attribute and label in the subsampled dataset. We touched on this limitation in lines 355-358, but will make it explicit in a revised manuscript. --- **W2. It would be useful if the use of FNR difference is highlighted somewhere in the paper. Clearly articulating the proposed method's applicability would be valuable.** A2. Thank you for raising this point – we will mention the use of FNR difference prior to the experimental results involving the metric and clarify that our proposed methods work across several group fairness metrics including (but not limited to) equalized odds, equality of opportunity (FNR difference) and statistical parity. --- **Q3. Could you provide further insight into why the linearity of the fairness intervention "DispMistreatment" hampers the comparison? Additionally, I'm curious about the compatibility of the FairMIPForest framework with other fairness interventions like Reduction, EqOdds, ROC, or Leveraging. Does the FairMIPForest framework support the assessment of FNR difference?** A3. Absolutely. While FairMIPForest does support FNR difference, comparing a decision tree classifier such as FairMIPForest with a linear classifier such as DispMistreatment can be challenging because the models differ in their expressivity. 
For example, while FairMIPForest can capture nonlinearities in the data that cannot be captured by DispMistreatment, FairMIPForest is constrained by the depth of the decision tree. Additionally, we found that running FairMIPForest yielded a worse fairness-accuracy curve than DispMistreatment despite having a greater runtime. We believe the reason for this poor performance is that the available FairMIPForest code uses early stopping in the training process to account for the computational cost of solving the mixed-integer optimization in the algorithm’s implementation. Regarding the other fairness interventions benchmarked in Appendix E.1, only Leveraging supports assessing FNR difference and could thus be compared against FairMIPForest. --- **Q6. Please mention the choice of variables as sensitive attributes and labels in the appropriate sections.** A6. Will do. In section 6.1, we mention that for HSLS, the sensitive attribute is race and the label is a student’s test performance – we will clarify that the label refers to a student’s 9th grade test score.
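As an aside on the resampling discussed in Q4 of this thread (drawing B subsampled datasets while preserving the joint distribution of the sensitive attribute and label), the following is a rough sketch, entirely with our own naming and not the paper's implementation, of stratified bootstrap draws over (S, Y) strata:

```python
import numpy as np

def stratified_bootstrap(S, Y, n_samples, rng):
    # One bootstrap index set that preserves the empirical joint
    # distribution of (S, Y) by resampling within each (s, y) stratum.
    S, Y = np.asarray(S), np.asarray(Y)
    idx = []
    for s in np.unique(S):
        for y in np.unique(Y):
            stratum = np.where((S == s) & (Y == y))[0]
            if stratum.size == 0:
                continue
            k = round(n_samples * stratum.size / S.size)
            idx.extend(rng.choice(stratum, size=k, replace=True))
    return np.array(idx)

rng = np.random.default_rng(0)
S = np.array([0] * 60 + [1] * 40)                        # sensitive attribute
Y = np.array([0] * 30 + [1] * 30 + [0] * 20 + [1] * 20)  # label
B = 5  # number of resampled datasets
subsamples = [stratified_bootstrap(S, Y, 100, rng) for _ in range(B)]
```

Each of the B index sets then yields one resampled dataset on which a base learner can be trained, matching the ensemble recipe described above at the level of (S, Y) proportions.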
Summary: This work investigates how different types of missing data affect algorithmic fairness, and provides algorithms to address this issue. Three types of missing data are considered: MCAR (missing data is independent of the observed and unobserved values), MAR (missing data depends on the observed values only), and MNAR (missing data depends on the unobserved values). The contributions are (1) theory showing that a model trained on imputed data (the classic impute-then-classify method) has unavoidably reduced performance; (2) strategies for adapting mitigation strategies in fair classification to missing data; and (3) an empirical analysis that supports the theory and compares against state-of-the-art fair classification algorithms that use impute-then-classify. Strengths: *ORIGINALITY.* I am not familiar with work in the missing data space, but looked through the related work section, skimmed a few of the works mentioned, and looked briefly at the literature on missing data in ML. This work seems to differ from previous contributions, and is adequately cited. The difference between the approach in this work and the approaches of previous work seems adequately explained. *QUALITY.* The main contribution of this paper is to provide alternative methods to the classically used imputation-then-classify strategy for dealing with missing data in fair classification. This is a well-motivated problem, as imputation is used regularly when data is missing, and this work investigates the information lost when performing imputation, and provides an alternative, competitive strategy for dealing with missing data. *CLARITY.* This work is clearly structured and well-written---I enjoyed the read! *SIGNIFICANCE.* This work is a useful contribution to the literature on mitigating unfairness when values are missing from the feature vector (not including the sensitive attribute). 
The strategies for finding suitable models in this setting can be used for linear and non-linear classifiers, and the empirical analysis shows promising results that are competitive with (and often outperform) methods that use imputation. Weaknesses: The fairness of models returned by the algorithms is not captured in the graphs in the main body of the work. The paper touts that training classifiers from imputed data can significantly worsen values of group fairness (and average accuracy), but their empirical analysis (in the main body) only compares the accuracy over datasets often used in fair classification. *MINOR COMMENTS* - The core contributions of this work seem to be applicable to general cases of missing data, not just fairness. - Lines 114-117, providing a small example of the different types of reason for missing data could strengthen the description in this paragraph. - Eqn 1: define the indicator function and the distribution of interest in the expected value - Fano's inequality (line 193) should be cited Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors elaborate on the group fairness achieved by their strategy vs the other baselines in the empirical analysis? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and we are glad to learn that you found our paper enjoyable to read! --------- **Q1. The fairness of models returned by the algorithms is not captured in the graphs in the main body of the work. The paper touts that training classifiers from imputed data can significantly worsen values of group fairness (and average accuracy), but their empirical analysis (in the main body) only compares the accuracy over datasets often used in fair classification.** A1. Thank you for raising this clarification point. We highlight that all the plots in the paper (Figs. 1-3 in the main text, and Figs. 6-9 in the Appendix) have a group fairness metric on the x-axis and accuracy on the y-axis. Consequently, our empirical analysis explicitly compares accuracy and group fairness values achieved by different imputation strategies and fairness interventions across datasets and missing value patterns. Fairness-accuracy plots are standard visualizations for benchmarking fairness interventions (see, for example, [1] and [2]). --------- **Q2. The core contributions of this work seem to be applicable to general cases of missing data, not just fairness.** A2. Indeed, you are correct. A substantial number of our methods and algorithms are applicable to a range of supervised learning scenarios that involve missing values. We chose to concentrate on the fairness implications of missing data as it is a practical problem but has not received adequate attention in the fair ML literature. --------- **Q3. Lines 114-117, providing a small example of the different types of reason for missing data could strengthen the description in this paragraph.** A3. Yes, this is a great point! To illustrate the three missing mechanisms, we can use an example of student survey responses: MCAR: For each student, each question has an equal probability p of not being answered. 
MAR: Certain questions are more likely to be left blank depending on other factors. For example, lower-income students may be less likely to have taken expensive tests and consequently are more likely to leave questions about test scores blank. MNAR: The probability of a question being left blank depends on the true answer. For example, a student whose parents have not completed high school may leave a question on the parents’ highest degree blank. We will include this example in the revised paper. --------- **Q4. Eqn 1 define the indicator function and distribution of interest in the expected value.** A4. The indicator function takes the value of $1$ if $h(x) = y$; otherwise, it is 0. The distribution of interest is the data generating distribution $P_{X,Y,S}$ where $X$ is the input feature vector which may contain missing values; $Y$ is the label; and $S$ is the group attribute. We will include this clarification in the revised paper. --------- **Q5. Fano's inequality (line 193) should be cited** A5. Thanks for pointing out this issue! Please refer to Theorem 6.3 in the textbook by Polyanskiy and Wu, 2022. This reference will be incorporated into our revised paper. Reference: Polyanskiy, Y. and Wu, Y., 2022. Information theory: From coding to learning. --------- **Q6. Could the authors elaborate on the group fairness achieved by their strategy vs the other baselines in the empirical analysis?** A6. (Please see response to Q1 as well). In all of our plots, the x-axis corresponds to a group fairness metric. For example, in Figure 2, the baseline models are represented by the red curves. The baseline model is a logistic regression classifier combined with the DispMistreatment fairness intervention [58] and FairProjection [2] on the left and right, respectively, both using zero imputation to handle missing values. Their fairness-accuracy trade-off is Pareto-dominated by the proposed methods for preserving information about missing values. We observe a similar pattern, i.e. 
the proposed methods Pareto-dominating the fairness-accuracy plot, across all experiments. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response! I looked at the discussion with other reviewers as well, and am satisfied with the revisions the authors will make. I will increase my score to a 7. --- Reply to Comment 1.1.1: Comment: Thanks so much for your response. We are glad to know that you are satisfied with our response. We will make sure to include the promised changes in the revised paper. Thank you again for the insightful and constructive comments you provided.
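The student-survey example of the three missingness mechanisms given in this thread can be simulated directly. The following sketch (variable names, probabilities, and the logistic form of the missingness models are our own illustrative choices) generates MCAR, MAR, and MNAR masks for a synthetic score feature:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
income = rng.normal(size=n)           # always-observed covariate
score = income + rng.normal(size=n)   # feature that may go missing

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# MCAR: values dropped with a fixed probability, independent of everything.
mcar_mask = rng.random(n) < 0.3
# MAR: missingness depends only on the observed covariate.
mar_mask = rng.random(n) < sigmoid(-income)   # lower income -> more missing
# MNAR: missingness depends on the unobserved value itself.
mnar_mask = rng.random(n) < sigmoid(-score)   # lower score -> more missing

score_mcar = np.where(mcar_mask, np.nan, score)
```

Under MCAR the mask is uncorrelated with the data, while under MAR and MNAR it is correlated with the observed covariate and the hidden value respectively, which is what makes the missingness pattern itself informative.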
Summary: The paper addresses missing-value issues in algorithmic fairness. Typical approaches first impute the missing data and then proceed with the classification task. However, the authors prove that imputed data harms group fairness as well as average accuracy. To avoid losing the missing patterns of the data to be imputed, the authors propose to modify the dataset to preserve the information within the missing patterns, and then apply an off-the-shelf fairness-intervention algorithm to the modified dataset. Experiment results show that the proposed adaptive algorithm improves fairness and accuracy over impute-then-classify methods. Strengths: * Theoretical illustration of the conclusion that ``imputed data harms the group fairness as well as the averaged accuracy". * The authors propose three methods for adapting linear fair classifiers to missing values: * **Method 1:** Adding missing indicator variables $\to$ this adaptive algorithm improves the accuracy of the classifier under the same group fairness constraint compared to a classifier trained using impute-then-classify; * **Method 2:** Affinely adaptive classification; * **Method 3:** Missing pattern clustering; * A general algorithm for nonlinear classifiers. * Experiments on various datasets demonstrate the effectiveness of the proposed methods. Weaknesses: * (1) Although the overall presentation is well-done, some parts could be much better if modified accordingly (please refer to **questions**). 
* (2) Although the motivation of Theorem 1 is good, when I was going through the proof and assumptions made in Theorem 1, I felt the assumptions were too strong, i.e., in this example, the conclusion is based on: * (2a) The feature $X$ is of only one dimension; (it would be much better if $X$ could be assumed to be two-dimensional, since missing values in a one-dimensional feature amount to the feature being completely missing; in real-world scenarios, it is also likely that only part of the information in a feature $x=[x_1, x_2]$ is missing, i.e., $x_2$ is missing). * (2b) The construction of the probability distribution $P_{S, X, Y}$: given the attribute is $S=s$, it seems that the authors are requiring that $Y=1$ not appear for non-missing $X$ (as specified: $\text{Pr}(Y=1, X=0|S=s)\text{Pr}(Y=1, X=1|S=s)=0$), and that $Y=1$ appear only for missing values (as specified: $\text{Pr}(Y=1, X=\text{NA}|S=s)=\alpha_s, \text{Pr}(Y=0, X=\text{NA}|S=s)=0$). Although the example itself is correct, it is hard to believe whether the conclusion will remain the same in more complex scenarios. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * (1) It would be much better if the meaning of **missing values** could be explained at the beginning of the paper, since missing values may indicate many aspects, for example, missing labels $y$ of a sample $(x, y, z),$ or missing (hidden) attributes $z$, or missing instances $x$. * (2) The notation w.r.t. accuracy $\mathbb{I}$ is given without any explanations. * (3) Should there be any conditional independency between $\hat{X}$ and $Y$, in line 190? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Please refer to the section of **Weakness**. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the kind comments and the encouragement! --------- **Q1. Although the overall presentation is well-done, some parts could be much better if modified accordingly (please refer to questions).** A1. We appreciate your constructive feedback! Please find our detailed responses to your questions below, where we hope to adequately address all your concerns. --------- **Q2. Although the motivation of Theorem 1 is good, when I was going through the proof and assumptions made in Theorem 1, I feel like the assumptions are too strong.** A2. This is a great point! Regarding your (2a), Theorem 1 can indeed be generalized to a scenario where $X$ is made up of two variables $X_{obs}$ and $X_{ms}$. In this situation, $X_{obs}$ is always observed while $X_{ms}$ has missing values according to a certain probability. In this setting, Theorem 1 remains valid; however, the mutual information is substituted by the conditional mutual information $I(M;Y|X_{obs})$. Regarding your (2b), our theorem relies on this assumption to maximize the dependency of the predicted label on the missing pattern. We could extend our results to a more general scenario where $Pr(Y=1,X=0|S=s)\neq 0$, but this might result in a looser upper bound in Theorem 1. --------- **Q3. It would be much better if the meaning of missing values could be explained at the beginning of the paper, since missing values may indicate many aspects, for example, missing labels $y$ of a sample $(x,y,z)$ or missing (hidden) attributes $z$, or missing instances $x$.** A3. Thank you for the suggestion! In this paper, we focus on a specific setting where the input variables $x$ may have missing values. We will clarify this aspect in our abstract. --------- **Q4. The notation w.r.t. accuracy $\mathbb{I}$ is given without any explanations.** A4. Thanks for highlighting this issue. To clarify, $\mathbb{I}$ stands for the indicator function. 
It is defined as $\mathbb{I}(event) = 1$ if the event is true; otherwise, it is 0. We will add this definition at the beginning of Section 2. --------- **Q5. Should there be any conditional independence between $\hat{X}$ and $Y$ in line 190?** A5. You are indeed correct! The imputed variable $\hat{X}$ is derived by applying an imputation mechanism to $X$ without knowledge of $Y$ (since $Y$ is the predicted label), which yields the Markov chain $Y \to X \to \hat{X}$, meaning that $\hat{X}$ and $Y$ are conditionally independent given the observed value of $X$. In light of this, we use the data processing inequality, which allows us to arrive at the desired inequality: $I(Y; X) \geq I(Y; \hat{X})$. --- Rebuttal Comment 1.1: Comment: Thanks for the responses, my concerns are well addressed. Hence, I increased my rating from 6 to 7. --- Reply to Comment 1.1.1: Comment: Thanks so much for your response. We are glad to learn that your concerns have been addressed. Thank you again for the insightful and constructive comments you provided.
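The data processing inequality argument in A5 can also be checked numerically. Below is a small sketch (the joint table is invented purely for illustration) showing $I(Y;X) \geq I(Y;\hat{X})$ when NA entries are zero-imputed:

```python
import numpy as np

def mi(joint):
    # I(A;B) in nats from a joint probability table (rows: A, cols: B).
    joint = np.asarray(joint, dtype=float)
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    m = joint > 0
    return float((joint[m] * np.log(joint[m] / (pa @ pb)[m])).sum())

# Hypothetical joint of label Y (rows) and X in {0, 1, NA} (cols).
p_yx = np.array([[0.30, 0.10, 0.05],
                 [0.10, 0.30, 0.15]])

# Zero imputation maps NA -> 0, i.e. it merges the NA column into the
# X = 0 column. This is a deterministic channel X -> X_hat, so
# Y -> X -> X_hat forms a Markov chain.
p_yxhat = np.stack([p_yx[:, 0] + p_yx[:, 2], p_yx[:, 1]], axis=1)

print(mi(p_yx), mi(p_yxhat))  # data processing inequality: first >= second
```

Because imputation is a (possibly randomized) function of $X$ alone, no choice of imputation value can recover the label information carried by the merged NA column.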
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for taking the time and effort to review our paper! We are delighted to receive positive feedback for the key components of the paper; in particular, that: our information-theoretic result characterizing the limitation of impute-then-classify provides novel and meaningful theoretical backing for our work (Reviewers oma2, SEin), our proposed algorithms are effective and interpretable (Reviewers GNEv, RyBS), and that our work is sufficiently novel and addresses a significant problem in fair ML (Reviewer ZFNz). The thoughtful feedback we received is even more appreciated in light of reviewers having several papers to handle in a short period of time. Below, we address the main points and questions raised by each reviewer and outline how we plan to update the paper accordingly; we will add the changes in the final version (both in the main text and appendix). Some common points shared by multiple reviewers are addressed below in this global response and are referred to accordingly in the responses to the individual reviewers. We would also appreciate it if you could acknowledge that you read the response and, if your concerns are addressed, if you would kindly consider raising your review score. We also welcome any additional feedback or suggestions that could further strengthen our paper and would be glad to hear from the reviewers. Thank you! --- **Q1. Reviewers RyBS and SEin mentioned extending our approach to related settings, such as (a) missing sensitive attributes and/or labels and (b) multiaccuracy and multicalibration notions of fairness.** A1. We thank the reviewers for raising these great points. (a) We first note that while missing input features is a challenge with regard to handling missing patterns, missing sensitive attributes and/or labels is a challenge with regard to evaluating (group) fairness metrics and incorporating fairness constraints into model training. 
Hence, we cannot directly apply existing work on missing sensitive attributes or labels to deal with our missing-input-feature problem setting, nor vice versa. However, one can easily combine our approach with existing work on missing sensitive attributes/labels to provide a complete story that can deal with any possible missing features. As an example, Yan et al. (2020) use a preprocessing method involving clustering and resampling to improve class balance when sensitive attributes are unknown, prior to training a classifier such as logistic regression. If there are missing values in non-sensitive attribute input features, we can apply our method by e.g. adding missing indicators after the preprocessing scheme, right before classifier training. The use of sensitive attributes when adapting non-linear fair classifiers (Section 5) is more complex as the sensitive attribute is explicitly used when drawing subsamples from the dataset; we make this limitation explicit in lines 355-358. (b) We recognize and appreciate the rigor in the definitions of multiaccuracy and multicalibration, as well as the growing literature surrounding them. In practice, any of the three methods in Section 4 can be used prior to applying a multiaccuracy/multicalibration fairness intervention since they do not depend on specific knowledge of the group attribute (as mentioned above) and only require adding more features/parameters (sections 4.1 and 4.2) or training separate fairness interventions on different parts of the dataset based on missing value pattern (section 4.3). 
Consequently, it cannot be directly extended to the setting where groups are denoted by a finite-complexity (yet potentially uncountable) set of functions against which the error should be uncorrelated (multiaccuracy) or calibrated (multicalibration). Nevertheless, the current experiments in the paper support our main conclusion: not preserving missingness information (such as in the impute-then-classify approach) can hinder the performance of fairness interventions. References: Yan, S., Kao, H. T., & Ferrara, E. (2020, October). Fair class balancing: Enhancing model fairness without observing sensitive attributes.
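To make the "adding more features/parameters" adaptation mentioned for Section 4.1 concrete, here is a minimal sketch (the function name and fill value are our own illustrative choices, not the paper's code) of augmenting a feature matrix with missing indicators before handing it to any fairness intervention:

```python
import numpy as np

def add_missing_indicators(X, fill_value=0.0):
    # Append one binary column per feature marking where values were
    # missing, then impute, so a downstream (fair) classifier can still
    # exploit the missingness pattern.
    X = np.asarray(X, dtype=float)
    indicators = np.isnan(X).astype(float)
    imputed = np.where(np.isnan(X), fill_value, X)
    return np.hstack([imputed, indicators])

X = np.array([[1.0, np.nan],
              [np.nan, 3.0]])
augmented = add_missing_indicators(X)
# Columns of `augmented`: the two imputed features, then their
# per-feature missingness indicators.
print(augmented)
```

Since the transformation only touches the input features, the augmented matrix can be passed unchanged to an off-the-shelf fairness intervention, which is what makes the adaptation black-box with respect to the intervention.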
NeurIPS_2023_submissions_huggingface
2023
Summary: This work examines the impacts of missing values in data on fairness interventions, particularly in contrast to the commonly implemented "impute-then-classify" procedure for handling missing values. The authors present the following: - investigation of how missing values impact algorithmic fairness in the context of three main modes of missing data (missing completely at random, missing at random, and missing not at random); - a theorem capturing the performance gap between optimal solutions when employing a generic imputation mechanism vs. not when using equalized odds (information-theoretic result); - methods for adapting fairness-intervention algorithms to missing data, both for linear and non-linear settings; - empirical evaluation of their proposed methods. Their findings suggest that fairness intervention strategies benefit from the preservation of information encoded in the missingness of data in terms of group fairness and accuracy. Strengths: - Clear motivation of the problem setting, both in the introduction and recapping in the conclusion. - Overall, mathematical notation is quite clean and easy to follow. - The authors provide an interesting information-theoretic performance gap result under a general imputation mechanism in the context of classification accuracy and equalized odds fairness constraint. This result implies that imputing and then classifying will never perform better (in terms of accuracy and group fairness) than using the information encoded in missing features, and moreover will result in sub-optimal performance due to information loss. - Multiple methods are presented to address missing values in the context of linear classification and one bagging-based method for nonlinear classification. These missing-value adaptation methods are flexible and can be used in conjunction with preexisting fairness-intervention algorithms (used in a black-box way). 
- The authors provide meaningful discussion of challenges and limitations in using their methods (determining choice of fairness intervention, which the adaptation methods depend on; sensitive groups/attributes not being known beforehand). They also effectively demonstrate the value of this research direction in the context of algorithmic fairness. - The authors provide implementation and hyperparameter details used in their experiments in the Appendix, supporting reproducibility (though this should be referenced in the main paper). Weaknesses: It's unclear what the trade-offs are between the three methods presented for linear classification. The figures comparing the performance between the methods against baselines are very hard to visually interpret, and there is insufficient discussion highlighting the performance differences between these and the baselines. It'd be helpful to further flesh out this section. Given this presentation, it is also unclear what the value is in providing three methods for a more constrained and less practically applicable setting. Furthermore, it is unclear how one would determine which of the three linear methods one should use - the authors note "we believe that the best adaptive algorithm is not universal, and one should select the adaptive algorithm based on the distribution of the data". What suggestions do you have for the reader in doing this? Additional suggestions: - In alignment with a question provided in the Questions section, proofs in the Appendix would benefit in some places from more rationale between steps - readers may not necessarily share your same mental model or background, and this can reduce cognitive overhead for the reader. - Overall, the empirical results presented in the plots (Figures 1-3) are very hard to read and interpret, and are unfortunately quite inaccessible (font size, curve markers and overlap, overall size). 
The paper would benefit from making these more human-interpretable and by highlighting key takeaways (as stated above) and trade-offs. - Nit: Please state upfront in Section 4 the settings you provide algorithms for! Based on the organization of the paper, a reader would have to be motivated to get to Section 5 to uncover that you provide methods for linear and nonlinear settings. :] - Nit: please include references in the main paper to additional results/content in the Appendix throughout, i.e. proofs, additional experimental details, etc. - Minor nit: please include the year in citation references. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the proof of Theorem 1, I didn't follow the transition between lines 539 and 540, nor where q came from. It'd be helpful to explicitly add a note for arriving at a < 1/3. 2. What implications does Theorem 1 have in a less trivial/higher dimensional setting? The multiclass classification setting? 3. Please refer to the Weaknesses/Suggestions section for additional questions around the linear adaptation methods, results, and trade-offs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors provide meaningful discussion of challenges and limitations in using their methods, namely that the choice of fairness intervention is an important dependency and choice when utilizing the adaptation methods, along with the challenges introduced when sensitive groups/attributes are not known ahead of time and need to be inferred in an online manner. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and for appreciating the novelty and value of the work! --- **Q1. Trade-offs between the three methods for linear classifiers.** Please refer to our response for Q2 below. **Q2. It is unclear how one would determine which of the three linear methods one should use - the authors note "we believe that the best adaptive algorithm is not universal, and one should select the adaptive algorithm based on the distribution of the data". What suggestions do you have for the reader in doing this?** A2. We thank the reviewer for raising this point. We introduced three adaptive algorithms for linear classifiers since, across our experiments in the main text and the appendix, no single method consistently dominates fairness-accuracy performance. Our suggestion is for users to consider the choice of intervention as a hyperparameter and select based on a validation set. We briefly summarize the trade-offs between these methods in terms of complexity and performance [this discussion will be added to the updated paper]: - Adding missing value indicators [Sec 4.1] is the simplest intervention, which is arguably its main appeal. The advantage of this approach in linear models is that model weights assigned to missing-value indicators may enable users to interpret (to the extent that linear models are interpretable) how missingness is incorporated in classification. Adding missing indicators can achieve comparable performance to the other two methods (cf. Fig 2, 7), though tends to perform worse on average. - Affinely adaptive classification [Sec 4.2] is more flexible than adding missing value indicators, with the drawback of requiring additional parameters in the model (see lines 204-207). This method performs competitively with adding missing indicator values and can achieve higher accuracy (cf. Fig 2, 7). 
However, the user should weigh the potential gain in accuracy against the additional complexity cost. - Missing pattern clustering [Sec 4.3] can achieve the best fairness-accuracy performance if the missing value patterns can be clustered such that applying separate fairness interventions on each cluster is advantageous. Note that here the classifier is linear per cluster, but not as a whole. Figure 6 in the appendix displays experiments on synthetic data where missing pattern clustering achieves a far better accuracy-fairness operating point. Appendix E.1 explains why this performance is observed. This gain in performance was more muted in the experiments in Figures 1 and 2. Ultimately, missing pattern clustering should be preferred when sufficient data is available to cluster missing patterns. --- **Q3. Providing more rationale between proof steps.** A3. Absolutely, that's an excellent suggestion! We'll certainly expand on the proofs in the Appendix, providing a more detailed explanation as well as offering intuitive insight. For additional information, please refer to our responses to Q8. --- **Q4. Interpretability and accessibility of plots (Figures 1-3).** A4. Please see above for a discussion of the trade-offs between different proposed methods. We will provide updated figures and captions to improve accessibility and interpretability in the revised paper. --- **Q5-7. Nits and minor nits stated in Weaknesses.** A5-7. Thank you for raising these important clarification points – we will update the manuscript addressing all the nits and minor nits. We will add additional pointers to the appendix in the main text. --- **Q8. In the proof of Theorem 1, I didn't follow the transition between lines 539 and 540, nor where q came from. It'd be helpful to explicitly add a note for arriving at a < 1/3.** A8. Thank you for carefully reviewing the proof. We provide additional details below which will be incorporated into the revised paper. 
By our constructed data distribution $P_{S,X,Y}$ in lines 531-532, we know
$$ Pr(Y=1, S=0) = \big(Pr(Y=1, X=0 \mid S=0) + Pr(Y=1, X=1 \mid S=0) + Pr(Y=1, X=\mathrm{NA} \mid S=0)\big) \cdot Pr(S=0) = \alpha_0 q_0, $$
where $q_0$ denotes $Pr(S=0)$. Similarly, we have
$$ Pr(Y=1, S=1) = \alpha_1 q_1, \quad Pr(Y=0, S=0) = (1-\alpha_0) q_0, \quad Pr(Y=0, S=1) = (1-\alpha_1) q_1. $$
Now we have
$$ \begin{aligned} Pr(\hat{Y} = Y) &= Pr(\hat{Y} = 1 \mid Y=1, S=0)\, Pr(Y=1, S=0) + Pr(\hat{Y} = 1 \mid Y=1, S=1)\, Pr(Y=1, S=1) \\ &\quad + Pr(\hat{Y} = 0 \mid Y=0, S=0)\, Pr(Y=0, S=0) + Pr(\hat{Y} = 0 \mid Y=0, S=1)\, Pr(Y=0, S=1) \\ &= (1-p_1) (\alpha_0 q_0 + \alpha_1 q_1) + \frac{p_0 + p_1}{2} \cdot \big((1-\alpha_0) q_0 + (1-\alpha_1) q_1\big), \end{aligned} $$
where the last step uses the equations between lines 538 and 539, coupled with the equations we elaborated above. Lastly, note that $q_0 + q_1 = 1$. This leads us to the desired equation. Regarding $\alpha < 1/3$, we apply it in the final step of the equation from line 540 to 541. Given this condition, we have $1-\alpha > 0$ and $1-3\alpha > 0$. As a result, the objective function reaches its maximum when $p_0$ and $p_1$ are both equal to 1. --- **Q9. What implications does Theorem 1 have in a less trivial/higher dimensional setting? The multiclass classification setting?** A9. Yes, Theorem 1 can be extended to the multi-class setting. It can also be extended to the setting where $X$ is composed of two variables $X_{obs}$ and $X_{ms}$, where $X_{obs}$ is always observed and $X_{ms}$ has missing values with a certain probability. In this case, the statement of Theorem 1 still holds with the mutual information replaced by the conditional mutual information $I(M;Y|X_{obs})$. --- **Q10. Please refer to the Weaknesses/Suggestions section for additional questions around the linear adaptation methods, results, and trade-offs.** A10. We hope our response has addressed all of your comments. Please feel free to let us know if you have any additional comments that can help us further improve the paper.
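As an aside, the sign argument in A8 can be checked numerically. The sketch below is illustrative only (not from the paper) and assumes the symmetric case $\alpha = \alpha_0 = \alpha_1 < 1/3$, under which (using $q_0 + q_1 = 1$) the objective reduces to $(1-p_1)\alpha + \frac{p_0+p_1}{2}(1-\alpha)$; a grid search confirms the maximum at $p_0 = p_1 = 1$:

```python
# Numeric sanity check (illustrative; assumes a common alpha_0 = alpha_1 = alpha)
# of the claim that the accuracy objective is maximized at p_0 = p_1 = 1 when
# alpha < 1/3: d/dp_0 = (1 - alpha)/2 > 0 and d/dp_1 = (1 - 3*alpha)/2 > 0.
def accuracy(p0, p1, alpha):
    return (1 - p1) * alpha + (p0 + p1) / 2 * (1 - alpha)

alpha = 0.2  # any value below 1/3
grid = [i / 20 for i in range(21)]  # p values in [0, 1]
best = max((accuracy(p0, p1, alpha), p0, p1) for p0 in grid for p1 in grid)
assert (best[1], best[2]) == (1.0, 1.0)  # maximum attained at p0 = p1 = 1
```

Since both partial derivatives are strictly positive for $\alpha < 1/3$, the maximizer is unique, which the grid search reflects.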
null
null
null
null
null
null
Data Curation for Image Captioning with Text-to-Image Generative Models
Reject
Summary: This paper focuses on data curation for image captioning. This paper shows that mismatched image-caption pairs do harm to the captioning model. To address this problem, generative models are used. In detail, the BLIP model is used to generate captions based on images, and the Stable Diffusion model is used to create images based on captions. Strengths: 1. Data curation is an important and effective topic, which could benefit many tasks including visual synthesis, image captioning, language and visual representation, etc. 2. This paper discovers the weak point of captioning datasets, especially for Flickr30K. 3. It is interesting to use the BLIP model and the Stable Diffusion model to create data for training. Weaknesses: 1. There are many methods to augment text data, e.g., adding or editing some words, using synonyms, and changing sentence structure. I think these methods are also worth evaluating. 2. From Table 2, we can see that BLIP's performance is not significantly affected by the methods proposed in this paper (i.e., Remove, ReplaceCap, and ReplaceImg). For example, the CIDEr of COCO only slightly rises from 132.0 to 133.1. 3. Figure 5 shows that the proposed methods might make the performance worse, especially on the COCO dataset. So I am concerned about the generalization of the proposed methods. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Both the BLIP model and the Stable Diffusion model use large datasets (e.g., LAION-5B). Would the performance of Image Captioning be improved by using part of the data in LAION-5B? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: None. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Evaluation of text augmentation methods Yes, there are many possible text augmentation methods, which mainly increase the total number of training samples, such as [[3]](https://www.mdpi.com/2076-3417/10/17/5978). Instead, we focus on a new approach that leverages text-to-image models to dynamically curate existing images without scaling up the training data. --- > Would the performance of Image Captioning be improved by using part of the data in LAION-5B? Yes, we believe that this is possible with some filtering mechanism, as BLIP gained its performance improvement with the CapFilt procedure during pretraining. We are not sure if the finetuning performance can be further improved with data already seen in the pretraining stage (though BLIP only uses part of LAION-5B). --- > BLIP's performance is not significantly affected by the methods proposed. For example, the CIDEr of COCO only slightly rises from 132.0 to 133.1. Finetuned with our curation methods, BLIP reaches a 95.8 CIDEr score with a +3 CIDEr score increase, which is state-of-the-art performance on Flickr30K image captioning. Though the absolute improvement of CIDEr for COCO may seem small, this is already comparable to the BLIP_CapFilt model (133.3 CIDEr), whereas the CapFilt technique is applied during pretraining and would require more computation resources. --- > Generalization of the proposed methods Please kindly see our general response for the generalization ability of our approach. --- Rebuttal Comment 1.1: Comment: Thanks for the author's elaborate response, and most of my concerns have been well addressed. --- Reply to Comment 1.1.1: Comment: Thank you! We are glad to hear that our response did a good job of addressing most of your concerns. Does it change your final rating for our submission?
Summary: This paper studies data curation strategies for training image captioning models. Firstly, it identifies the “difficult samples” based on the captioning loss dynamically at the end of each epoch. Subsequently, it introduces three data curation strategies to modify the difficult samples: (1) removal of an image-text pair, (2) replacement of the caption and (3) replacement of the image using text-to-image generative models. The main technical innovation is the third strategy, which is carefully designed in terms of prompt engineering and fine-tuning on the image captioning datasets. The empirical studies show that the proposed data curation strategies can enhance the performance of the baseline BLIP captioning model. The authors also conduct analysis on the data curation ratio, dynamic versus static curation strategy and the errors of images generated by the stable diffusion model. Strengths: * The idea of employing text-to-image generative models to curate training data for image captioning is novel and well-motivated. Weaknesses: ### Effectiveness of the proposal * According to Table 2, the performance of the third data curation strategy, which is the main technical innovation of this work, is not advantageous compared to the heuristic removal and caption replacement strategies. * According to Figure 5, all three proposed strategies are sensitive to data curation ratio. Consequently, training the captioning model multiple times is necessary to achieve satisfactory performance, which is less efficient compared to the baseline BLIP model. ### Design of the method * Identifying the samples to modify based on training loss is questionable. A higher loss does not necessarily imply that the sample is harmful to training. Although Section 5.2 has shown that more errors are identified in images of higher loss, the experimental setup has two issues: (1) The loss is computed over the generated images rather than the real images in the original dataset. 
(2) The errors are categorized as targeting image generation, rather than image captioning. In other words, an image that possesses imperfect visual quality but aligns well with the caption may not necessarily be considered a noisy training sample for image captioning. ### Missing reference * An idea similar to the “round-trip captioning evaluation” is already proposed by [1], which generates a caption from the synthesized image and measures the similarity between input text and predicted caption. ### Clarity * In line 243, it is unclear whether the “model loss” refers to captioning loss or image generation loss. [1] Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations of this work from three aspects: (1) Lack of adaptation of the proposal to the pre-training stage. (2) Reliance on pre-trained image understanding and text-to-video generative models. (3) Increase in training time due to the usage of text-to-image generative model. Moreover, a significant limitation of this study is the absence of evidence demonstrating the superior effectiveness of the generated images compared to heuristic data curation strategies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Effectiveness of the proposed ReplaceImg method In Figure 5, we show that ReplaceImg generally works better for both datasets (second best for Flickr30K---0.1 CIDEr lower than Remove, and best for COCO). That Flickr30K benefits more from removing high-loss training samples indicates that the original dataset may be noisier than COCO (Figure 6 and L211). This is more specific to the dataset rather than to the general image captioning task or our curation method. --- > Limitation: lack of adapting the approach to pretraining We agree with the reviewer that our approach could be adapted to pretraining vision-language models to learn better general visual representations, and more applications. However, that would require substantially more computational resources than the present paper. --- > Sensitivity to data curation ratio: do we need to train the model multiple times for the curation method to work? We conducted additional experiments to show that the curation approach is model-agnostic and the ratio is transferable. We evaluated our curation methods on another state-of-the-art VL model---BEiT-3---by applying exactly the same curation ratio from BLIP and obtained similar improvements. Please kindly see our general response and the attached PDF for detailed experiment results. --- > Identifying the samples to modify based on training loss is questionable. A higher loss does not necessarily imply that the sample is harmful to training. The use of loss values to separate difficult samples has been discussed and utilized in previous literature, including Curriculum Learning and Self-paced Learning. Please kindly see our general response for more details. And to clarify, we didn't use the training loss; as described in L82-86, we use the model checkpoint after each epoch to evaluate the training samples and use the loss as the indicator of sample difficulty. 
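To make the per-epoch mechanism concrete, the dynamic curation loop described above can be sketched as follows. This is an illustrative simplification, not the authors' code; `loss_fn` and `replace_fn` are hypothetical stand-ins for scoring a sample with the current checkpoint and for a curation operation such as ReplaceImg:

```python
# Illustrative sketch of dynamic loss-based curation: after each epoch, score
# every training sample with the current checkpoint and curate the highest-loss
# fraction of the dataset, keeping the dataset size unchanged (in-place curation).
def curate(dataset, loss_fn, replace_fn, ratio):
    """dataset: list of samples; loss_fn(sample) -> float (per-sample loss under
    the current checkpoint); replace_fn(sample) -> curated sample (e.g. with a
    newly synthesized image); ratio: fraction of samples to curate."""
    ranked = sorted(dataset, key=loss_fn, reverse=True)  # highest loss first
    k = int(len(ranked) * ratio)
    return [replace_fn(s) for s in ranked[:k]] + ranked[k:]
```

Because curation re-runs after every epoch, a curated sample that still lands in the tail of the loss distribution can itself be replaced in a later epoch.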
--- > Regarding section 5.2: (1) The loss is computed over the generated images rather than the real images in the original dataset. (2) The errors are categorized as targeting image generation, rather than image captioning. 1) To clarify, Section 5.2 serves to point out limitations in the synthesized images (L240), not issues in the original dataset. Please refer to the examples of high-loss samples from the original training dataset in Appendix Figure 2. Our ReplaceImg approach replaces images of the high-loss samples dynamically, regardless of whether the image belongs to the original dataset. A low-quality synthesized image with a high loss will also be replaced in the following training epoch. 2) The errors in the synthesized images are identified by human annotators given reference captions. Please see our annotation interface in Appendix Figure 1. During the study, the annotators are asked to judge the quality of a synthesized image in terms of whether the image has issues matching the reference captions. ---- > Clarity: model loss in L243 The model loss refers to the captioning loss in L243; we will make this clearer in the revised version. We thank the reviewer for pointing out the missed reference, and we will make sure to update our revised version. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thank you for taking the time to respond to my comments. However, my major concerns still remain and I prefer to keep the rating unchanged. ### Effectiveness of the proposed ReplaceImg method While ReplaceImg performs the best in COCO and the second best in Flickr30K, the advantage over the heuristic data curation strategies is marginal (-0.1 CIDEr relative to Remove in Flickr30K, +0.4 CIDEr relative to ReplaceCap in COCO). ### Sensitivity to data curation ratio * The additional results of BEiT-3 are still insufficient to demonstrate the curation ratio is generalizable to different VL models. 
* As for the cross-domain evaluation, the additional results show that the best curation ratio in COCO (10%) fails to improve over the baseline BLIP and BEiT-3 in the COCO -> Flickr30K setting. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading through our response and providing the feedback! > The additional results of BEiT-3 are still insufficient to demonstrate the curation ratio is generalizable to different VL models. Can you explain why showing that it directly translates to another model is insufficient? > Cross-domain evaluation We think the reviewer's request for improved cross-domain performance would be more relevant to pretrained models than to the finetuned models we present in the paper. As is widely known from the impossible triangle [1], finetuned models often struggle with OOD generalization [2]. Nevertheless, we provide cross-domain evaluation results, as we consider it an interesting question. And we are glad to see that our curation method does not hurt cross-domain performance (COCO -> Flickr30K), and has gained significant improvements (+9.6 CIDEr score increase) compared to the standard finetuned model when transferring from Flickr30K to COCO. This already shows that the model finetuned with our curation method has good cross-domain performance. [1] Zhu, C., & Zeng, M. 2022. Impossible Triangle: What's Next for Pre-trained Language Models? [2] Aishwarya Agrawal, Ivana Kajic, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, and Aida Nematzadeh. 2023. Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization
Summary: This paper proposes a data curation model for image captioning. If the loss of a particular image caption pair is high, then either remove the image-caption pair from the training set or replace the caption with a more similar caption or they generate a new image for the difficult caption. The authors demonstrate these strategies help to improve the performance of BLIP caption generation model. Strengths: The idea is interesting. Paper shows some positive gains on COCO and FLickr30K. Weaknesses: The details of the method are not clear. How to select a replacement caption? Why one should pick only the high-loss image-caption pairs? Loss may be high due to many other reasons. Compared to other data augmentation methods in the literature that is also discussed in the related work, what is the novelty? Why this is a significant finding? I am not sure if this is a significant finding. The method is also evaluated using a single model. Obtained results are not state-of-the-art. It is not clear whether such a mechanism will contribute to any state-of-the-art methods in captioning. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: What are the conceptual differences w.r.t. [3]? W.r.t [3] is this paper novel? I am not able to understand Figure 2 and the message behind this figure. No guarantee synthesized image has a smaller loss than the original image. It is not clear to me what is F in Table 1. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: Limitations and societal impact are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > How to select a replacement caption? This is described in L98-102. For both Flickr30K and COCO, each image is paired with 5 caption annotations. We replace the caption by randomly selecting from the other 4 captions. --- > Significance of our findings We propose a model-agnostic approach to dynamically updating an image captioning dataset while it is being used to train an image captioning model. With the proposed method, we show state-of-the-art image captioning models can be improved by curating existing resources. --- > The method is evaluated using a single model We conducted additional experiments to evaluate our curation methods on another state-of-the-art VL model, BEiT-3. We used exactly the same replacement rates from the BLIP model. The results show that we obtained similar improvements in performance by directly applying those replacement rates, i.e. a 3 CIDEr-point improvement on Flickr30K with ReplaceImg (40%) and 0.7 CIDEr points on COCO with ReplaceImg (2 std). The curation is more effective on Flickr30K, which may be because COCO is included in the BEiT-3 pretraining data. Please see the detailed results in Table 1 in the rebuttal PDF. | BEiT-3 | B4 | CIDEr | |---------------------|---------:|----------:| | Flickr30K | 28.9 | 79.3 | | +ReplaceImg (40%) | **32.0** | **82.4** | | COCO | 39.4 | 133.7 | | +ReplaceImg (2 std) | **39.6** | **134.4** | --- > Difference to [3] There are several differences. First, [3] proposes to perform data augmentation on the captions, whereas we perform data augmentation on the images. Second, [3] performs the data augmentation as a pre-processing step, whereas our data curation happens dynamically as the model is trained. Third, [3] reports their final results after additional fine-tuning with SCST, whereas we only use cross-entropy-based training of the model. 
Finally, [3] augmented the training dataset to 2-3 times its original size by augmenting the image captions, whereas we show improved performance by curating the existing dataset without increasing the total number of unique training examples. In other words, it is an in-place augmentation. --- > Figure 2 This is covered in Section 3.1, especially L91-94. Together with Figure 8, we analyze how the curation methods impact the loss distribution of training samples and help with the model training process. --- > No guarantee synthesized image has a smaller loss than the original image. There is indeed no guarantee that the synthesized image will result in improving the quality of the model. It is possible that the synthesized image has a larger loss than the original image. However, as we dynamically curate the dataset during the training process (L38 and Figure 7), synthesized images/captions that have high losses would be replaced in the following training process if they prove to be in the tail of the distribution of training losses. --- > F in Table 1 F in Table 1 indicates that we finetune the Stable Diffusion model. Thanks for pointing out the issue; we will make it clearer in the revised version. --- > Why one should pick only the high-loss image-caption pairs? We use the loss values to identify the difficult samples during training (Section 3.1). The use of loss values to separate difficult samples has been discussed and utilized in previous literature, including Curriculum Learning [1] and Self-paced Learning [2]. Please see the general response for more details on the effectiveness of dynamic loss-value-based curation. [1] Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. ACM, 2009. [2] M Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. 
In Advances in Neural Information Processing Systems, pp. 1189–1197, 2010. --- Rebuttal Comment 1.1: Title: Please discuss Comment: Dear ymVw, Thanks for your review! Are your main concerns addressed by the rebuttal? Also, you explicitly mentioned that the results are not state-of-the-art. Could you please provide explicit references to methods that perform better? Best, SAC --- Rebuttal Comment 1.2: Comment: Thanks for the response. I am still not convinced that the use of loss value is the right approach. Loss is an effect of many causal elements. After reading other reviews and all responses, I will keep my original rating.
Summary: This paper focuses on improving image captioning by improving the quality of the existing dataset. To this end, this paper proposes three data curation methods: the removal of an image–caption sample; replacing a caption with another caption; and replacing images using a text-to-image generation model. Experimental results demonstrate that models trained with the proposed methods consistently outperform baselines. Strengths: 1. The proposed method is well-motivated, that is, to improve the quality of the existing dataset. This paper explores the problem of making better use of existing datasets, which is a very interesting research direction. 2. The authors conduct extensive experiments over these two datasets, where the models trained with the proposed methods outperform baseline methods consistently. 3. The paper is well-written and easy to follow. Weaknesses: 1. From my understanding, it is risky to judge the quality of a sample based on the loss value. A sample with a large loss value may be a hard sample or a mislabeled sample, so it is risky to judge the sample quality only based on the loss value. 2. In addition to the performance of the model on the test set, the generalization ability of the model is also important. It is not clear whether the proposed method reduces the gap between the training set and the test set or improves the quality of the training set. 3. Lack of necessary theoretical analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the proposed method improve the generalization ability of the model? It would be better to carry out cross-domain testing to verify the generalization ability of the model, that is, the Flickr30K dataset is used as the training set, and the test set of the COCO dataset is used as the test set. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Generalization ability and cross-domain evaluation It had never occurred to us that our data curation method would reduce the gap between the training and the test set, because nothing in the method knows anything about the distribution of the test data. In order to better understand how our method contributes to generalization, we adopt your suggestion and conduct two additional experiments. Please kindly see the cross-domain evaluation results and more details about the strong generalization ability of our approach in the general response. --- > It is risky to judge the quality of the sample based on the loss value Please kindly see the general response for this question. --- > Lack of necessary theoretical analysis We are not sure how to interpret this comment. Which type of theoretical analysis would you like to see in the paper? --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. In general, theoretical analysis refers to why and how the proposed method works.
Rebuttal 1: Rebuttal: We express our gratitude to all the reviewers for their time and helpful feedback. We are glad that all five reviewers found our work interesting and well-motivated, and `Reviewer-ndRn`, `Reviewer-Hsed` and `Reviewer-tP6L` also found our work enlightening to a broader scope of Vision-Language learning and useful for the community. We address two shared concerns below. > Identifying the samples as targets of curation based on loss values (`Reviewer-Hsed`, `Reviewer-9L7M`) The use of loss values to prevent difficult samples from confusing the model has been discussed and utilized in previous literature, including Curriculum Learning [1] and Self-paced Learning [2]. Though we agree that there might be more advanced influence evaluations to judge the quality of the training samples, our experiments show that curating the samples that have outlier losses is sufficient to improve the downstream performance. A high loss indicates that the model is predicting the wrong probability distributions over the expected tokens given the image, i.e., it struggles to generate a caption that is similar to the reference. In Figure 7, we show the empirical effectiveness of replacing images based on the loss criterion dynamically, instead of randomly replacing the same number of images. It is clear that the dynamic replacement of examples based on tracking the losses is always better than randomly replacing images. (The dashed-diamond line is always higher than the solid-circle line.) Qualitative examples in Figure 2 in the appendix also validate the effectiveness, as the high-loss samples often have an overly specific caption that would be difficult for the model to learn/generate. [1] Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. ACM, 2009. [2] M Pawan Kumar, Benjamin Packer, and Daphne Koller. 
Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189–1197, 2010. --- > Generalization ability of the curation approach (`Reviewer-Hsed`) In order to better understand how our method contributes to generalization, we conduct two different experiments. First, we find that our curation approach is **generalizable to different VL models**. We evaluated our curation methods on another state-of-the-art VL model---BEiT-3---to see if our approach transfers. More specifically, we used exactly the same replacement rates from the BLIP model. The results show that we obtained similar improvements in performance by directly applying those replacement rates, i.e., a 3-point CIDEr improvement on Flickr30K with ReplaceImg (40%) and a 0.7-point CIDEr improvement on COCO with ReplaceImg (2 std). The curation is more effective on Flickr30K, which may be because COCO is included in the BEiT-3 pretraining data.

| BEiT-3              |       B4 |     CIDEr |
|---------------------|---------:|----------:|
| Flickr30K           |     28.9 |      79.3 |
| +ReplaceImg (40%)   | **32.0** |  **82.4** |
| COCO                |     39.4 |     133.7 |
| +ReplaceImg (2 std) | **39.6** | **134.4** |

Second, we conduct a **cross-domain evaluation** to determine whether a model finetuned with our curation method has stronger cross-domain generalization ability. We use the best-performing BLIP model finetuned on Flickr30K (40% ReplaceImg curation) and evaluate on COCO. We obtained a +0.5-point BLEU increase and a +2-point CIDEr increase compared to the standard finetuned model (no curation) on the COCO test set. For the BEiT-3 model, we obtained a +3-point BLEU increase and a +9.6-point CIDEr increase. The performance remains the same if finetuned on COCO and evaluated on Flickr30K. This shows that a model trained with our curation method also has stronger generalization ability. 
| Flickr30k -> COCO |       B4 |     CIDEr |
|-------------------|---------:|----------:|
| BLIP              |     31.8 |     108.2 |
| +ReplaceImg (40%) | **32.3** | **110.2** |
| BEiT-3            |     21.0 |      76.4 |
| +ReplaceImg (40%) | **24.0** |  **85.0** |

| COCO -> Flickr30K |   B4 | CIDEr |
|-------------------|-----:|------:|
| BLIP              | 25.6 |  67.9 |
| +ReplaceImg (10%) | 25.6 |  67.8 |
| BEiT-3            | 25.5 |  67.0 |
| +ReplaceImg (10%) | 24.7 |  66.9 |

More detailed experiment results are in Table 1 and Table 2 in the pdf attached below. Pdf: /pdf/e4111ac27296d7ea5c9a9674501c8db8e06e20df.pdf
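The outlier-loss criterion that drives this curation (e.g., ReplaceImg (2 std)) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the function name, the dict-based loss tracking, and the toy loss values are all assumptions for the example.

```python
import statistics

def select_outlier_samples(losses, num_std=2.0):
    """Flag training samples whose loss exceeds mean + num_std * std.

    `losses` maps a sample id to its most recent per-sample training loss;
    the returned ids are the candidates for removal or image replacement.
    """
    mean = statistics.fmean(losses.values())
    std = statistics.pstdev(losses.values())
    threshold = mean + num_std * std
    return [sid for sid, loss in losses.items() if loss > threshold]

# Nine well-fit samples and one clear loss outlier: only the outlier is flagged.
losses = {str(i): 1.0 for i in range(9)}
losses["noisy"] = 6.0
print(select_outlier_samples(losses))  # → ['noisy']
```

Each epoch, the flagged samples would then be removed, have their caption swapped, or have their image regenerated, depending on the curation method.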
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose an iterative training approach to improve image captioning models. This approach _refreshes_ the training dataset every epoch with _higher quality_ image-text pairs (authors call it "data curation"). Dataset samples with very high training loss are updated -- the real image is replaced with one generated by the Stable Diffusion model. The authors compare their approach with two baselines: one which removes high-loss samples, and one where the image is replaced by another from the training dataset itself. Experiments are performed with the BLIP model and two captioning datasets -- COCO and Flickr30K. Authors also perform an accompanying human study to provide directions for future work. Strengths: This paper has numerous technical strengths: - The proposed method is conceptually simple and easy to implement. - The strategy of updating the training dataset with "better" samples is very general: it is agnostic to the model architecture and the multi-modal task at hand. - The writing and presentation quality of the paper is excellent. It contains adequate implementation details to make this work reproducible. - The experimental setup and ablation study is very meticulous. Tables of results contain experiments that begin with a BLIP baseline, and subsequent rows introduce one change at a time. - The authors have conducted a human study with sensibly defined failure categories to understand how failure modes of Stable Diffusion can impact captioning performance. Weaknesses: Like its technical strengths, this paper also has some shortcomings. Below I list a few salient concerns with the paper. I look forward to hearing the authors' response, and I am happy to update my final assessment. 1. **Results do not match with the presented story:** The main results (`Table 2`) indicate that all considered dataset curation approaches are beneficial over a BLIP baseline that doesn't train on curated data. 
However, the main pitch of this paper is to use generative models like Stable Diffusion to replace images (last row), which in fact performs marginally better or even worse than other curation techniques. The biggest improvements are generally yielded by the "Remove" strategy. I recommend the authors rethink the positioning of the motivation and frame it as an exploratory study -- it seems obvious to use generative models for iterative training/distillation and some works already do it for other applications, but for this task, a practitioner is better off simply filtering noisy samples altogether. 2. **Captioning metrics appear saturated, maybe overkill for COCO/Flickr:** The captioning metrics on COCO and Flickr are already saturated, e.g. decimal improvements are less meaningful for COCO in the range of 130+ CIDEr and 20+ SPICE score. Since BLIP is already trained with large amounts of data and diverse tasks, the proposed approach may be overkill for the tasks considered in this paper. I suggest the authors rethink other applications where the benefits of this strategy are more prominently observed (see Weakness 5 below). 3. **What if the caption is noisy and can't generate meaningful images?** An image-text pair may be unaligned if the caption is uninformative, as frequently encountered in larger web datasets like [Conceptual Captions](https://arxiv.org/abs/2102.08981), [YFCC](https://arxiv.org/abs/1503.01817), [RedCaps](https://arxiv.org/abs/2111.11431), etc. For instance, captions coming from alt-text may not have any semantic content whatsoever (e.g. see Figure 2 in the [ALIGN paper](https://arxiv.org/abs/2102.05918)) to generate meaningful images. The proposed approach forces the generative model to create an arbitrary image and ends up adding noise to the training data. Some selective mechanism to replace either the image or the caption may be needed to scale this approach to general image captioning beyond COCO and Flickr30K. 4. 
**Related work needs more coverage:** The main focus of this paper is image captioning, hence a broad coverage of prior works on image captioning is necessary. However, this section only cites a handful of very recent modeling papers. I suggest the authors begin the discussion with some early image captioning papers like: - (Vinyals et al, CVPR 2015) Show and tell: A neural image caption generator - (Karpathy and Li, CVPR 2015) Deep visual-semantic alignments for generating image descriptions - (Donahue et al, CVPR 2015) Long-term recurrent convolutional networks for visual recognition and description 5. **[Related to 1, 2] Have the authors considered applications other than image captioning?** What if this curation strategy is used to train general visual representations? I suggest a CLIP-style contrastive model and/or a BLIP/VirTex-style generative model. The contribution can be strengthened by broadening the scope to various downstream tasks. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Some comments and suggestions: - Imagen is cited twice (44 and 45). Please remove the duplicate. - The related work section has phrases like "large-scale stable diffusion models" (`Line 64`) and "stable diffusion text-to-image models" (`Line 69`). "Stable Diffusion" is a set of models developed by the Stability AI startup, and these phrases seem to appropriate a brand as a mathematical/conceptual term. I suggest the authors remove "stable" from these phrases and call them "text-to-image diffusion/generative models" or something similar. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations section should be updated if any of the above-mentioned open questions are not within the scope of this paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and for recognising the contribution of our work and its potential impact on the community. > Results do not match with the presented story: simple removal works better than ReplaceImg. Figure 5 gives a broader context than Table 2, where it can be seen that ReplaceImg yields more reliable improvements for the COCO dataset. Flickr30k benefits marginally more from removing high-loss training samples, which may indicate that the Flickr30K dataset is noisier than COCO (Figure 6 and L211). This is specific to the dataset rather than to the general image captioning task. For example, if we measure the overlap of tokens in the training and validation sets, an admittedly crude measure of similarity, we observe that, for Flickr30k, REMOVE improves the vocabulary similarity between the train and test datasets by 15%, while the similarity improves by 6% for COCO. > Captioning metrics appear saturated, maybe overkill for COCO/Flickr, rethink applications including learning general visual representations, etc. where the benefits of this strategy are more prominently observed We agree with the reviewer that our approach can be adapted to pretraining vision-language models to learn better general visual representations, and to more applications, when computation budgets allow (L276-280). This is an exciting direction for future work that we will highlight in the paper. In the submitted paper, with our computational resources, we show that our proposed curation methods are beneficial and can be built on top of existing state-of-the-art models. > What if the caption is noisy and can't generate meaningful images? Interesting question! As you wrote, this becomes more relevant if one were to apply the ReplaceImg method to larger-scale noisy datasets for pretraining. 
For our experiments on the Flickr30K and COCO datasets, we improved the expected generated-image relevance by selecting the best prompting method from our round-trip captioning evaluation (L162-166). We reduce the impact of a noisy caption by concatenating all five captions and adding the styler. Please see Figure 1 in the rebuttal pdf for qualitative examples of the generated images. If we were to apply this to pretraining, it might be necessary to have an additional classifier that could determine whether a sentence was visually descriptive [[1]](https://aclanthology.org/W15-2805.pdf) or had high expected semantic content. > Related work and phrasing suggestions We thank the reviewer for the detailed suggestions, and we will make sure to update the corresponding sections in the revised version. [1] Robert Gaizauskas, Josiah Wang, and Arnau Ramisa. 2015. Defining Visually Descriptive Language. In Proceedings of the Fourth Workshop on Vision and Language, pages 10–17, Lisbon, Portugal. Association for Computational Linguistics. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the authors for a thoughtful rebuttal. I listed several concerns in my initial review, grouped into five bullet points. Below I provide pointwise responses outlining how far these concerns were addressed in the rebuttal: 1. **Results do not match with the presented story:** I am not entirely convinced by the response. This concern was raised by multiple reviewers, and (in their response) the authors restate their results, specifically `Figure 5` in the paper. I understand what the authors are trying to convey, but I was trying to make a broader commentary about the positioning of this paper. The authors present the story as "our curation method improves captioning performance", but the empirical results fall short of supporting this story — their approach works just as well as a baseline that removes noisy samples. 
In the past, the typical "minimum publishable unit" that advances SOTA on captioning benchmarks has had larger gains (on CIDEr/SPICE metrics) than in this paper. For reference, the authors can see a few papers on [Papers with code](https://paperswithcode.com/sota/image-captioning-on-coco-captions) and discount concurrent works published in the same conference venues. So in my opinion, the authors are competing on losing ground and minimizing the impact of their work if they write "our curation method improves captioning" with the presented results. I suggested rethinking the positioning, to present a more exploratory study — it seems very obvious to readers that expanding the training data using diffusion models may be a "free lunch", but performance improvements are marginal in reality. Such a story may be an important piece of evidence that helps ML practitioners to calibrate their expectations on what these generative models can and cannot do. In fact, the human study presented in this paper would align well with an exploratory story. However, the authors do not comment on this suggestion. Below are recent examples in vision-language learning that the authors may refer to when considering how the presented results can be conveyed differently: (a) "The Curse of Recursion: Training on Generated Data Makes Models Forget" https://arxiv.org/abs/2305.1749 and (b) "Masked Autoencoding Does Not Help Natural Language Supervision at Scale" https://arxiv.org/abs/2301.07836 2. **Captioning metrics appear saturated, maybe overkill for COCO/Flickr, and (weakness 5) Have the authors considered applications other than image captioning:** The authors do not acknowledge the first half of my concern (+0.4 CIDEr on COCO is less meaningful when metrics are so saturated). They state limited computational resources as a reason not to explore other benchmarks and vision-language pretraining tasks. This argument is not convincing — the authors claim to use 4x A100 GPUs for experiments in the paper. 
In my opinion, this scale is sufficient to perform controlled comparisons with small-scale generative and contrastive vision-language models (e.g. CLIP with a ResNet-50/ViT-base and up to 4K batch size, or BLIP/VirTex with similar model capacity/batch size). Limited computational resources are a valid reason to discount the lack of additional experiments during the rebuttal. However, my concern broadly encompasses all the experiments presented in the paper. This could be a second alternative apart from framing this paper as an exploratory study — the authors should consider searching for downstream applications where their curation strategy can show more prominent improvements. **(3) What if the caption is noisy and can't generate meaningful images? and (4) Related work needs more coverage:** These concerns are sufficiently addressed, thank you. **Summary:** All things taken together, this paper requires careful thought on (a) either presenting existing results in a less biased and more analytical manner or (b) demonstrating downstream tasks where the proposed approach is significantly better than baselines. In my opinion, saying that the proposed curation strategy is "effective" for improving image captioning models is not entirely correct. Unfortunately I will have to keep my original rating unchanged. I encourage the authors to update the paper and consider a future venue. Good luck!
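The token-overlap measure mentioned in the authors' rebuttal (the "admittedly crude" vocabulary similarity between train and test captions) could be computed along these lines. This is a hypothetical reconstruction, not the authors' code: plain whitespace tokenization and a Jaccard ratio over the two vocabularies are assumed.

```python
def vocabulary_similarity(train_captions, test_captions):
    """Jaccard similarity between the token vocabularies of two caption sets."""
    train_vocab = {tok for cap in train_captions for tok in cap.lower().split()}
    test_vocab = {tok for cap in test_captions for tok in cap.lower().split()}
    return len(train_vocab & test_vocab) / len(train_vocab | test_vocab)

train = ["a dog runs on the beach", "a cat sits on a mat"]
test = ["a dog sits on the grass"]
print(round(vocabulary_similarity(train, test), 3))  # → 0.5
```

Comparing this score before and after curation (e.g., after REMOVE) gives the kind of percentage change the rebuttal reports for Flickr30k and COCO.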
Subject-driven Text-to-Image Generation via Apprenticeship Learning
Accept (poster)
Summary: This paper proposes a method for subject-driven text-to-image generation, where a model is tasked with generating novel renditions of a subject given a few images of that subject. Different from previous fine-tuning approaches, this paper trains a model that conditions its generation on the given subject images. To train such a model, this paper first trains a large number of subject-specific fine-tuned models, and uses these fine-tuned models to generate new training data for knowledge distillation. The distilled model achieves good qualitative performance. Strengths: - It is nice that the apprenticeship model does not require finetuning on new subjects. It is an interesting idea to directly use subject images as the conditional input for generation. - The paper is mostly well-written. - The learned model achieves good qualitative results. Weaknesses: - The training data covers a wide range of subjects. Therefore, it is hard to tell if the apprenticeship model learns to generalize to new subjects or not. Have the authors performed de-duplication to ensure that the subjects used for evaluation do not appear in the training set of the expert models? - The proposed method requires training of 2M expert models, where each expert is a 2.1B Imagen model. This is extremely expensive both computationally and in terms of storage. - The inference speed of the apprenticeship model seems to be slow, as each demonstration sample needs to pass through the Imagen model. Methods such as DreamBooth do not incur additional inference cost after the finetuning. - It would be nice to see some ablation on the importance of the expert model. For each subject cluster, how many synthetic images are used to train the apprenticeship model? What if only the real images are used to train the apprenticeship model? - It would be better if the paper could give a more detailed illustration of the apprenticeship model's architecture, instead of referring to the ReImagen paper. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible to finetune the apprenticeship model on new subjects? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comment #1 “The training data covers a wide range of subjects. Therefore, it is hard to tell if the apprenticeship model learns to generalize to new subjects or not. Have the authors performed de-duplication to ensure that the subjects used for evaluation do not appear in the training set of the expert models?” We investigated whether the evaluated concepts appear in the training dataset. We use the CLIP score to retrieve the nearest cluster images for each of the 30 concepts in DreamBench-v2. We manually check whether these concepts appear in the training set. We found that very few subjects have similar variants in the training set, and these are mostly “dogs” and “cats”. Most other subjects, like “Robot” and “Vase”, are very unique, and we can’t find even modestly similar ones. Comment #2 “The proposed method requires training of 2M expert models, where each expert is a 2.1B Imagen model. This is extremely expensive both computation-wise and storage-wise.” Our method is computationally expensive, but since we don’t store any DreamBooth checkpoints, our storage consumption is very low, i.e., only a few hundred gigabytes. Comment #3 “The inference speed of the apprenticeship model seems to be slow, as each demonstration sample needs to pass through the Imagen model. Methods such as DreamBooth do not incur additional inference cost after the finetuning.” There are two solutions to address this issue: (1) SuTI is highly compatible with DreamBooth. We can use the tuned SuTI just like DreamBooth by setting #demonstration=0 during train/inference time, which would lead to the exact same performance. If we want better results, we can provide a single demonstration to the subject-tuned SuTI, which yields much better subject fidelity than DreamBooth with only marginal inference overhead (15 secs vs 10 secs). 
(2) We can use distillation as in https://arxiv.org/abs/2210.03142 to shorten the diffusion steps by 10x or even more, which can decrease the inference time significantly. Comment #4 “It would be nice to see some ablation on the importance of the expert model. For each subject cluster, how many synthetic images are used to train the apprenticeship model? What if only the real images are used to train the apprenticeship model?” Thanks for the suggestion. We will add more ablation studies to the paper revision. 1. We have experiments showing how the model performance changes w.r.t. the number of clusters, and what the minimum number of clusters needed to train SuTI is. 2. We have also conducted experiments to see whether we could just use the clustered images (without the DreamBooth expert) to train SuTI. Specifically, we use k-1 images as exemplars and the k-th as the target. However, because the clustered images are so similar or near-duplicates of each other, this training guides the model to only copy-paste from the demonstrations. That is why we need more diverse outputs from DreamBooth to discourage the copy-paste behavior. We will add more analysis of this discovery to the Appendix. Comment #5: “It would be better if the paper could give a more detailed illustration of the apprenticeship model's architecture, instead of referring to the ReImagen paper.” Thanks for the suggestion. We will add the detailed model architecture to the Appendix in the revision. Comment #6 “Is it possible to finetune the apprenticeship model on new subjects?” Yes! We found that fine-tuning SuTI is better than fine-tuning the original Imagen model by a significant margin, particularly in terms of subject fidelity. We plan to include some of these results in the revision. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. Most of my concerns have been addressed, thus I will raise my score. 
Despite some weaknesses of this paper (such as its computational cost and closed-source model), I still lean towards acceptance.
Summary: The paper presents a novel subject-driven text-to-image generator named SuTI. This model leverages in-context learning as opposed to subject-specific fine-tuning. SuTI is built upon the principles of apprenticeship learning and is capable of generating high-quality, customized, subject-specific images. Remarkably, it achieves this at a speed that is 20 times faster than optimization-based methods. SuTI has demonstrated superior performance over existing models on benchmark tests such as DreamBench and DreamBench-v2. The paper highlights the recent advancements in text-to-image generation models, which have shown significant progress in generating highly realistic, accurate, and diverse images from given text prompts. Strengths: The paper exhibits several strengths across the dimensions of originality, quality, clarity, and significance: 1. Originality: The paper introduces SuTI, a novel subject-driven text-to-image generator that uses in-context learning instead of subject-specific fine-tuning. This approach is original and innovative, as it deviates from the conventional optimization-based methods, offering a faster and more efficient solution. 2. Quality: The quality of the paper is evident in the rigorous testing and validation of the SuTI model. The model has been benchmarked against existing models on DreamBench and DreamBench-v2, where it has shown superior performance. This demonstrates the robustness and reliability of the model. 3. Clarity: The paper is well-structured and clear in its presentation of the SuTI model. It provides a comprehensive explanation of the model's workings, its applications, and its performance in various tests. The use of visual aids and examples further enhances the clarity of the paper. 4. Significance: The significance of the paper lies in its contribution to the field of text-to-image generation. 
By introducing a faster and more efficient model, the paper pushes the boundaries of what is currently possible in this field. This could have far-reaching implications for a variety of applications, including content creation, design, and more. Weaknesses: One weakness of the paper is the lack of discussion around the cost and complexity of constructing the training dataset. The process of creating a comprehensive and diverse dataset for training a model like SuTI can be a significant undertaking, both in terms of time and resources. The paper does not delve into the specifics of this process, leaving readers without a clear understanding of the potential challenges and costs associated with data collection and preparation. This lack of transparency may make it difficult for others to replicate the study or apply the model in different contexts. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Dataset Construction: Could you provide more details about the process of constructing the training dataset for SuTI? Specifically, how much time and resources were required to create the dataset? Were there any significant challenges encountered during this process? 2. Model Scalability: How scalable is the SuTI model with respect to the size and diversity of the training dataset? If the dataset were to be expanded or updated with new subjects or scenarios, would this significantly impact the model's performance or the resources required for training? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comment #1 “Dataset Construction: Could you provide more details about the process of constructing the training dataset for SuTI? Specifically, how much time and resources were required to create the dataset?” Since we can parallelize the DreamBooth training and generation, we set up 100 instances of TPU v4-8 (128G with 4 chips each). Each instance takes one “subject cluster” and trains one DreamBooth for 500 steps to generate the outputs. Once the output is generated, the model is reset to the original weights to move on to the next “subject cluster” without saving any checkpoint. The whole dataset construction takes roughly 100 TPU v4-8 instances for two weeks. We don’t need to store the DreamBooth checkpoints, so the storage cost is negligible. Comment #2 “Were there any significant challenges encountered during this process?” This is a very insightful question. Yes, we did encounter some challenges: (1) The memory consumption of the original DreamBooth recipe is very high due to the usage of the Adam optimizer, which already reaches the memory ceiling of TPU v4-8; the memory reset would then cause an OOM issue. We replaced Adam with Adafactor to resolve this, which decreases the memory consumption significantly and enables our pipeline. (2) The other difficulty is how to filter the DreamBooth failure outputs. We tested different variants and conducted extensive human studies to decide the CLIP-delta threshold. Still, this threshold is not perfect, with lots of false positives and false negatives. The community is still in dire need of a better automatic metric to evaluate subject fidelity. Comment #3 “Model Scalability: How scalable is the SuTI model with respect to the size and diversity of the training dataset? 
If the dataset were to be expanded or updated with new subjects or scenarios, would this significantly impact the model's performance or the resources required for training?” So far, we have found the model to be quite scalable with respect to the number of subjects and skill sets we add to the training dataset. For now, we have 5-6 skill sets and millions of subjects in the dataset. We are still exploring newer skill sets right now, like image editing, image inpainting, etc. One possible way to extend the model is to use a small adapter layer to increase its capacity to handle more diverse inputs.
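The filtering of DreamBooth failure outputs mentioned in this rebuttal (thresholding a CLIP-based score before a generation enters the apprentice's training set) might look schematically like this. It is a simplified sketch: the embedding vectors, the plain cosine-similarity threshold, and the function names are all assumptions standing in for the paper's CLIP-delta criterion, which is only described informally here.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_expert_outputs(subject_emb, generation_embs, threshold=0.7):
    """Keep indices of expert generations whose embedding is similar enough
    to the reference subject embedding (a stand-in for the CLIP-delta test)."""
    return [i for i, g in enumerate(generation_embs)
            if cosine(g, subject_emb) >= threshold]

# Toy 2-D embeddings: the first generation stays close to the subject, the second drifts.
subject = np.array([1.0, 0.0])
gens = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
print(filter_expert_outputs(subject, gens))  # → [0]
```

As the rebuttal notes, any single threshold of this kind will produce false positives and false negatives, which is why the authors call for a better automatic subject-fidelity metric.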
Summary: This paper introduces SuTI, a subject-driven text-to-image (T2I) generation method. Numerous expert models are first trained on millions of image clusters collected from the internet, each focusing on a specific visual subject. A dataset is then created, consisting of concept images, target prompts, and corresponding target images. By training the apprentice model on this dataset, SuTI can generate high-quality, customized subject-specific images without the need for test-time fine-tuning. Both qualitative and quantitative experiments are performed against the baselines. Strengths: - The presented method is well-motivated and easy to understand. - SuTI demonstrates fast generation and has a broad domain of applicability. - The paper includes extensive experimental evaluations with impressive results. Weaknesses: Some details of the experiments require clarification: - The hyperparameters used for training the baselines, such as the number of training iterations and learning rates, should be provided. - More information is needed regarding the human evaluation, including the number of questions, the number of evaluated images, and the number of users involved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - More details about the experiments should be clarified. - Can the proposed model perform multiple concept generation, which is also important in concept image generation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comment #1 “The hyperparameters used for training the baselines, such as the number of training iterations and learning rates, should be provided”: Thanks for the reminder. We mostly use the official Colab notebooks from these papers and their default hyperparameters for the baselines. We will add these details to the revision. Comment #2 “More information is needed regarding the human evaluation, including the number of questions, the number of evaluated images, and the number of users involved.” We will add our human evaluation guideline to the Appendix in the revision. The evaluation is done by several trained raters on DreamBenchv2’s 220 images. Comment #3 “Can the proposed model perform multiple concept generation, which is also important in concept image generation?” We evaluated SuTI on 2-concept image generation. The model works well on some easy combinations like “dogs [D] besides vase [V]” or “cat [C] behind a bowl [B]”. But it would fail on more difficult combinations (with subject interaction) like “dog [D] wearing shoes [S]”, etc. We will put some of these results in the revision. --- Rebuttal Comment 1.1: Title: Follow-up Comment: We would like to follow up on our rebuttal. If there are any additional outstanding concerns that you would like us to address, please let us know. Thank you and we look forward to your response. --- Rebuttal 2: Title: Official Comment by Reviewer F8hk Comment: Thanks to the authors for their efforts. I have read the rebuttal and the comments from the other reviewers, and most of my concerns have been addressed. I agree with Reviewer JnTg and look forward to the code being open source. I decided to raise my score from 5 to 6.
Summary: The paper proposes an in-context learning method for model customization given personalized objects. The method first collects a large-scale dataset of custom concepts, ensuring all images in each custom concept cluster are similar to each other, and fine-tunes the model for each concept using DreamBooth. These expert models are then used to build the dataset to train the in-context learning method. It takes a few image-text pairs of the concept and a new text prompt to generate the image corresponding to the new prompt. The method is based on the Re-Imagen framework. To ensure a high-quality image-text pair dataset from the DreamBooth models, a CLIP-based feature similarity threshold is applied. Strengths: The method is one of the first works on in-context learning for model customization. This avoids the time-consuming step of fine-tuning models given any new subject images. Both qualitative and quantitative results show that the method performs on par with or better than existing zero-shot or fine-tuning methods. The paper is well-written, easy to understand, and has extensive ablation experiments to validate the importance of different aspects of the method. Weaknesses: Why is there a need to train the Dreambooth expert models for training the final SuTI model? How does the performance change if the collected dataset itself is used directly to train the final model with N-1 image text pairs for in-context input and 1 sample as the new inference? It would be great to have an analysis regarding that. Other objects included in the image often take the characteristics of the main subject, e.g., kite in Figure 10 2nd column last row or the british shorthair cat in Fig. 9 1st column last row in the appendix. Does adding a specific characterization in the text prompt for the other subject prevent that? Or is it overfitting in such scenarios? Does having more in-context demonstrations prevent that? 
Can it be combined with other editing-based methods like SD-Edit or similar approaches to edit a specific region of the image or combine multiple specific subjects in the same image? E.g., to generate the canine dog of Figure 5 eating the cherry bowl of Figure 4. The editing example shown in Figure 5 changes the whole image instead of just the birds in the TV show. Another point I would like to make is that it would have been great to also show the performance of SuTI with the SD backbone. This is not a weakness per se, and it doesn't affect the final rating. It's excellent work, but it would be helpful to assess the method's performance on such open-source models. Some of the baseline methods shown in the paper are already with the SD backbone. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Minor questions and comments: 1. Does having a variable number of demonstrations instead of 3 have any effect on the final model performance? 2. It would be nice to include some sample descriptive text predicted by the language model for various subjects. Is it usually a class word corresponding to the subject, or does it include more details regarding the subject? 3. Is there any reason for the low photorealism score of SuTI compared to DreamBooth? 4. Line 94. "to to" typo 5. Line 116. "models" -> "model" 6. Line 209. Probably a missing word in the sentence. "with largest space consumption" 7. Line 280. "generated generated" repeated words. 8. Line 286. “GAN=based” -> “GAN-based” 9. Line 306. "subjects are contain" -> "subjects contain" 10. Appendix Line 429. "Subject-Drive" -> "Subject-Driven" 11. Appendix Line 430. "task of our paper aims to solve" -> "task our paper aims to solve" Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the highly detailed and constructive feedback! Comment #1 “Why is there a need to train the Dreambooth expert models for training the final SuTI model? How does the performance change if the collected dataset itself is used directly to train the final model with N-1 image text pairs for in-context input and 1 sample as the new inference? It would be great to have an analysis regarding that.” This was exactly our initial experiment when building SuTI. However, the images in the same cluster are too similar, often near duplicates (particularly for rigid-body objects), so after training the model falls into a local optimum of copy-pasting the reference image without referring to the text instruction at all. We spent weeks of effort increasing the diversity within each image cluster to discourage this behavior; however, the issue was not fully resolved. Therefore, we took the harder route of using DreamBooth + LLM to generate diverse subject images under different contexts/visual scenes, producing highly different outputs. This proved to be extremely effective in overcoming the copy-paste behavior. We will add more discussion about these failed efforts in the revision. Comment #2 “Other objects included in the image often take the characteristics of the main subject, e.g., kite in Figure 10 2nd column last row or the british shorthair cat in Fig. 9 1st column last row in the appendix. Does adding a specific characterization in the text prompt for the other subject prevent that? Or is it overfitting in such scenarios? Does having more in-context demonstrations prevent that?” This is a very common issue with the base text-to-image diffusion model, which sometimes does poorly at understanding language compositionality. We have some empirical evidence to support this: (1) “adding more specific characterization” would generally help in some cases, e.g., the kite case can be fixed, but the “british shorthair” case won’t be fixed. 
(2) “adding more in-context examples” does not seem to help much in this case. More examples can increase identity preservation, but do not fully address the compositionality issue. (3) This is not necessarily “overfitting”; rather, the compositionality issue is inherited from the original text-to-image generation model (see https://arxiv.org/abs/2212.05032). Comment #3 “Can it be combined with other editing-based methods like SD-Edit or similar approaches to edit a specific region of the image or combine multiple specific subjects in the same image? E.g., to generate the canine dog of Figure 5 eating the cherry bowl of Figure 4. The editing example shown in Figure 5 changes the whole image instead of just the birds in the TV show.” “Combine with region-based in-painting”: Yes, it can definitely be combined with SD-Edit/DiffEdit to in-paint only a specific area of the image. We have some results that we haven't put in the paper yet but will do so in the revision. “Multiple subject composition”: We have tested this composition ability and found that SuTI does generalize to two subjects in some easy cases. But the failure rate is still pretty high; therefore, we did not include this part in the paper. Moving forward, multi-subject image generation will definitely be a priority. Comment #4 “Another point I would like to make is that it would have been great to also show the performance of SuTI with the SD backbone. This is not a weakness per se, and it doesn't affect the final rating. It's excellent work, but it would be helpful to assess the method's performance on such open-source models. Some of the baseline methods shown in the paper are already with the SD backbone.” We definitely want to make SuTI accessible to the general public. Since the submission, we have made some good progress in this direction. In particular, we are actively investigating the possibility of releasing SuTI as a model API service. 
Comment #5 Response to other individual questions. Q1: Yes, normally the more demonstrations the better, but training becomes slower and memory consumption higher. We picked 3 as a trade-off between memory consumption and quality. Q2: Sure, we will add more examples in the Appendix. The data contains both: (1) a general class word like “shoes”, and (2) a more specific subject description like “Nike Air … Shoes”. We hope these two modes can help the model ground better. Q3: There are two potential reasons: (1) SuTI is fit to the output of DreamBooth, which has a slightly different distribution than natural images, especially under large classifier-free guidance; the model might overfit to this new distribution. (2) SuTI only has 2B parameters, while DreamBooth has 2B parameters per subject, which leads to different model capacity. This could cause the photorealism to differ. Q4-Q11: We will fix the typos. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The rebuttal addresses most of my concerns. Looking forward to the open-source model/API as well. Thanks. I will keep my current rating.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive feedback. Here we want to highlight a few things: 1. First of all, we are advocates of open research and strive to make everything publicly accessible. We plan to make the model or API publicly available before paper publication, although we can't guarantee the exact date and time at this point. 2. Regarding the combination of SuTI and DreamBooth, we conducted more experiments and show our results in the attached pdf. 3. Regarding the ablation study that only uses the clustered images to train SuTI, i.e. without relying on the synthetic images from DreamBooth: we provide more evidence in the attached pdf. Pdf: /pdf/d457a336bda8a67c585d13a15e28381f78e96caa.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a new model that can perform subject-driven text-to-image generation. Instead of fine-tuning a large pretrained model on each subject, the authors use apprentice learning to first construct a virtual dataset from a large number of teacher models, each specific to a kind of subject, and then have the student model learn from the constructed dataset. The authors have also shown strong qualitative and quantitative results and an ablation study. Strengths: The paper is very well written and easy to follow. The results seem very promising. The paper tackles an interesting problem that is relevant to applications. Weaknesses: 1. This paper could have been impactful if the authors plan to open source or provide a way to reproduce the results. However, from the checklist the authors indicate that they have no plan to open source the model. Given that this model is trained with hundreds of TPUs, unless the authors provide the pretrained model or explicit instructions on how to reproduce the results in a non-cost-prohibitive way, I don’t really see a way for the peer researchers to verify the results, nor do I see any real benefits for the practitioners from this paper. 2. The authors performed human evaluations. However, details about the instructions given to human evaluators and the compensation are not provided. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Please address the weaknesses mentioned above. I am happy to change my score if problems related to my concerns are resolved. How long does it take to train the apprentice model? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: There is no discussion of the details of the human evaluation (i.e., the instructions given to humans and the compensation). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comment #1 “This paper could have been impactful if the authors plan to open source or provide a way to reproduce the results. However, from the checklist the authors indicate that they have no plan to open source the model.” We are in full agreement that it would be far better for the community to release the code and/or model, and we are actively working towards doing that. Although we cannot make guarantees at this point, we do plan to make the model or API publicly available to researchers before paper publication. When the paper was initially submitted, we had not yet established a concrete plan for open sourcing, so we were conservative and marked 'no' in the checklist to avoid potential miscommunication. Please note that according to the NeurIPS 2023 paper checklist guideline (https://neurips.cc/public/guides/PaperChecklist), particularly in section 6 (experiments), it is explicitly stated that “Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).” Comment #2 “I don’t really see a way for the peer researchers to verify the results”: We respectfully disagree. First of all, we will release all of our test prompts and generated results, which enables researchers to do comparisons. Secondly, we have included most key technical details in the paper, such that readers can run their own experiments to reproduce the results. Finally, as stated above, we are actively working on releasing the model to enable researchers to reproduce our results. Comment #3 “nor do I see any real benefits for the practitioners from this paper”: We believe that our paper provides deep insights for the research community. For example, we found that fine-tuning on SuTI is better than fine-tuning on the original Imagen model: the subject fidelity improves quite a lot. We plan to put some of these results in the revision. 
The paper also provides an in-depth description of our novel methodology, which can potentially be replicated with smaller resources on public models if some constraints are relaxed. Moreover, as we mentioned, we are actively investigating the possibility of releasing SuTI as a model API service. Comment #4 “The authors performed human evaluations. However, details about the instructions given to human evaluators and the compensation are not provided”: Thanks for the comment. We performed a training session for our human annotators to ensure their evaluation is calibrated; meanwhile, the compensation for our annotators is paid hourly instead of per annotation, which incentivizes the annotators to spend more time on each annotation. We will put our annotation guideline and other details in the Appendix in the revision. In addition, we would also like to mention that the automatic evaluation on DreamBench shows that SuTI outperforms other methods (best on 2 eval metrics, and on par with Imagen-based DreamBooth on 1 eval metric), which is aligned with the human evaluation. Comment #5 “How long does it take to train the apprentice model?”: The training time is less than 24 hours on 64 TPUs on Google Cloud Platform (the actual duration depends on the version of TPU used; for reference, training on TPU v4 takes ~10 hours). --- Rebuttal Comment 1.1: Title: About rebuttal Comment: Thanks again for providing the detailed reviews! We are wondering whether you have read our rebuttal. If there is anything else we can discuss or clarify, please let us know. We are more than happy to discuss further during the author-reviewer discussion period. --- Rebuttal Comment 1.2: Title: Thank you for your response Comment: Thank you for your response. Based on the prospective open source model, I will change my rating from rejection to acceptance. 
And I do want to add the following comments. Regarding Comment #2: since the model is extremely computationally expensive, even if the authors provide technical details in the paper, it is still very unlikely that general readers can reproduce the results without the open-source model. Regarding Comment #3: I agree with the author rebuttal that the paper can provide some insight into the methodology. Thank you for your efforts to open source this model. Looking forward to its release. --- Reply to Comment 1.2.1: Title: Thank you for your response Comment: Thanks a lot! We will try our best to release our model or API to the public.
Granger Components Analysis: Unsupervised learning of latent temporal dependencies
Accept (poster)
Summary: This paper presented a novel unsupervised learning approach using Granger causality to identify driving/driven components. The method was demonstrated on EEG and fMRI data, with results consistent with known neurophysiology. Strengths: The paper proposes an algorithm to identify the pairwise causal structure between latent variables from a multivariate observational dataset, and the results are supported by simulated and empirical data. Weaknesses: 1) The number of latent variables is a key parameter for this unsupervised learning; however, it is pre-defined without any adaptive mechanism. Similarly, there is no adaptive mechanism for L. 2) The literature is outdated and there is no performance comparison against similar algorithms for causal inference. PCA and ICA are not algorithms for causal inference. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: No further questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The simulation is limited to a specific model without variations in, for example, the noise level, the model complexity, or the length of the time series. This makes it difficult to assess the performance and generalizability of the proposed algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses ### The number of latent variables is a key parameter for this unsupervised learning, however it is pre-defined without any adaptive mechanism. similarly, no adaptive mechanism for L. We acknowledge that we did not propose an adaptive mechanism for selecting the number of latent pairs $P$, where the latent pairs are defined as $(y_i,z_i),~i=1,2,\dots,P$. The proposed method, like PCA and CCA, produces a natural ordering of these pairs, where the first pair exhibits the largest value of Granger causality, the second pair the second largest, and so forth: $J_{y_1 \rightarrow z_1} > J_{y_2 \rightarrow z_2} > \dots > J_{y_P \rightarrow z_P} > 0$. Similar to what is done with PCA and CCA, the number of component pairs may be selected by observing the "spectrum" of the proposed method. Please note that the objective function of the algorithm $J_{y_i \rightarrow z_i}$ is monotonically related to the "strength of causality" $G_{y_i \rightarrow z_i}$ described in the global response. For example, in the simulation results of Figure 2(b), the first two pairs yield values near 0.1, while the third is at almost zero. This indicates that $P=2$ would have been the most appropriate choice in this example. In practice, selecting a value of $P$ that is near the knee point of the data's singular value spectrum, followed by observing the $J_{y_i \rightarrow z_i}$ values returned by the proposed method, is the anticipated approach to selection of $P$. In terms of selecting $L$, this reflects a tradeoff between capturing more of the auto- and cross-correlation structure in the data and minimizing the number of model parameters, which scales with $L^2$. Knowledge of the data's autoregressive structure is helpful here: for example, for the EEG dataset, we selected $L=8$ to capture 0.5 seconds, knowing that this was about the time scale of information integration in the motor system. 
For the fMRI dataset, we selected $L=4$ based on the time course of the hemodynamic response function and past studies that employed this value. ### The literature is outdated and there is no performance comparison against similar algorithms for causal inference. PCA and ICA are not algorithms for causal inference. This brings about an interesting and subtle point: Granger causality is more about "prediction" than it is about true causality in the physical sense. Much of the literature on causal inference is concerned with the latter, whereas Granger causality is generally employed as a statistical assay of temporal precedence (e.g. akin to a hypothesis test). The proposed contribution of this paper, while borrowing the idea of Granger causality, is really a new approach to unsupervised component analysis. This motivated the use of classical PCA and ICA as comparison methods, which we were admittedly not 100% satisfied with. Note that methods such as Canonical Correlation Analysis require two views of a dataset, which precludes its use as a comparison here. Based on the feedback from reviewer 57Zq, we have now tested MVARICA and found that it yields surprisingly low values for the strength of causality, and components that do not flip with respect to the side of the cued hand in the EEG motor imagery dataset (please see panels e and f in the supplemental PDF). --- Rebuttal Comment 1.1: Comment: Many thanks for the authors' clarification and response. It is an impressive work and I think that further comparison with the SOTA and performance evaluation is necessary since this is a very widely studied topic.
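For readers unfamiliar with the quantity being ordered in the rebuttal above, here is a minimal sketch of a classical log variance-ratio Granger statistic for a single pair of scalar time series. This is a stand-in illustration of the general idea; the paper's exact objective $J_{y \rightarrow z}$ (and its time-reversed variant) may differ.

```python
import numpy as np

def granger_J(y, z, L=2):
    """Classical log variance-ratio Granger statistic for y -> z:
    log of (residual variance of z predicted from its own past) over
    (residual variance of z predicted from its own past plus y's past)."""
    T = len(z)
    # lagged design matrices: column k holds the series at lag k+1
    Z_lags = np.column_stack([z[L - k - 1:T - k - 1] for k in range(L)])
    Y_lags = np.column_stack([y[L - k - 1:T - k - 1] for k in range(L)])
    target = z[L:]
    # restricted model: z's own past only
    r_res = target - Z_lags @ np.linalg.lstsq(Z_lags, target, rcond=None)[0]
    # full model: z's own past plus y's past
    F = np.column_stack([Z_lags, Y_lags])
    f_res = target - F @ np.linalg.lstsq(F, target, rcond=None)[0]
    return float(np.log(np.var(r_res) / np.var(f_res)))

# toy system in which y drives z with a one-step lag
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
z = np.zeros(2000)
for t in range(1, 2000):
    z[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

print(granger_J(y, z) > granger_J(z, y))  # True: causality runs from y to z
```

A component-analysis method of the kind discussed here would maximize such a statistic over spatial projections rather than evaluate it on fixed channels.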
Summary: The paper formulates the problem of learning a pair of spatial projections that optimize a criterion based on the Granger causality between the resulting components, which is itself based on regressing the first, driving, component onto the second, driven, component using a Wiener filter, and the converse for the time-reversed signals. A block coordinate descent algorithm is proposed to solve for one spatial projection while the other is fixed and vice versa. Experimental results on EEG and resting-state fMRI data illustrate the identification of meaningful spatial filters. Strengths: The paper is well motivated, well written, and clear. Synthetic experiments are simple but results are convincing. The method is applied to two different modalities of neuroimaging data and paradigms (EEG during motor imagery tasks and resting-state fMRI) and the results are discussed in depth. Weaknesses: An explicit statement of the assumptions about the nature of the relationship (linear or non-linear, time-invariant, etc.) is lacking. Standard Granger causality assumes a linear, time-invariant relationship. These assumptions should be stated when introducing the pairs in (2). The algorithm could be made more succinct by removing some of the redundancies. Minor: Line 79 should be clarified: $\mathbf{y}_p(t)$ is also in lagged form. Line 93: variables should be defined. I assume that there exists a scalar $c$ for any choice of $a$ and $b$. On line 114, the statement 'the driven signal is not explicitly removed.' is a bit misleading. While it is not removed, the explainable variance associated with this component is removed by the filtering (spatiotemporal regression). In algorithm 1, the dependency between the cost function $J$ and $\mathbf{X}$ is not explicit. *Nit picks:* I find the capitalization of methods beyond proper nouns a bit jarring. "Granger Causality" -> "Granger causality". "Kernel CCA" -> "kernel CCA". Lines 70, 81: closing double quotes are in the wrong direction. 
Lines 132–133 and 160–161 "We asked GCA" seems odd phrasing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is automatic differentiation an option over the numerical differentiation or the manually coded derivative? How would the method perform on synthetic data where the sources are instantaneously mixed only (like the ideal case for ICA)? Was the use of subject versus group explored? Or was it a problem of lack of data? Line 194: Is each ROI summarized by its mean or principal component? This is not mentioned. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A stopping criterion for estimating the number of sources is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Questions ### Is automatic differentiation an option over the numerical differentiation or the manually coded derivative? Automatic differentiation is an option over the manually coded derivative. By "automatic" differentiation, we mean that the gradient is computed using finite differences instead of the analytical formula. We will clarify this in the revised version. ### How would the method perform on synthetic data where the sources are instantaneously mixed only (like the ideal case for ICA)? If we understand the question correctly, this is the result of Fig 2h-k (top of page 6). We used the fully independent source case as a control to show that the method is not able to recover sources that exhibit zero Granger causality amongst each other. We hope that this addresses the Reviewer's question. ### Was the use of subject versus group explored? Or was it a problem of lack of data? A subject-specific analysis was not explored thus far. The reason, as you guessed, was that the amount of data for each subject was small relative to the dimensionality of the covariance matrices being estimated by the algorithm ($(LD)^2$, where $L$ = max time lag and $D$ = number of sensors). The goal of the analyses in the paper was to test the proposed method with enough data so that estimation of the covariance matrices was not obscuring the evaluation of the technique's ability to recover meaningful sources. ### Line 194: Is each ROI summarized by its mean or principal component? This is not mentioned. Thanks for catching this. It was the mean of all grey matter voxels in the ROI. We are adding this clarification to the revised manuscript. ## Weaknesses ### Explicit statement of the assumptions about the nature of the relationship (linear or non-linear), time-invariant, etc. is lacking. Standard Granger causality assumes a linear, time-invariant relationship. These assumptions should be stated when introducing the pairs in (2). 
Agreed -- this is being added to the revised version. ### The algorithm could be made more succinct by removing some of the redundancies. Indeed, we will revise the algorithm to make it more compact. ### Minor: Line 79 should be clarified that $\mathbf{y}_p(t)$ is also in lagged form. Thanks for catching this -- we are adding the definition of ${\bf{y}}_p$ to the revised version. ### Line 93 variables should be defined. I assume that there exists scalar $c$ for any choice of $a$ and $b$. Indeed, we are adding that $a$, $b$, and $c$ are arbitrary scalars. ### On line 114, the statement 'the driven signal is not explicitly removed.' is a bit misleading. While it is not removed, the explainable variance associated with this component is removed by the filtering (spatiotemporal regression). Yes, good catch. The statement is incorrect (please see also the Rebuttal to Reviewer BFyu) and has been removed. Only $y_1(t)$ is being removed. This does not remove $z_1(t)$ from the data. Rather, the component of $y_1$ that is present in $z_1$ is removed with the spatiotemporal regression. ### In algorithm 1, the dependency between the cost function $J$ and $\mathbf{X}$ is not explicit. We are adding the dependence of $J$ on the data $\bf{X}$ to the description of algorithm 1. ### Nit picks: I find the capitalization of methods beyond proper nouns a bit jarring. "Granger Causality" -> "Granger causality". "Kernel CCA" -> "kernel CCA". We have removed the capitalization from "Causality" and "Kernel". ### Lines 70, 81 closing double quotes are wrong direction. Thanks for catching this -- it has been fixed. ### Lines 132–133 and 160–161 "We asked GCA" seems odd phrasing. Agreed. We are rephrasing this to "We estimated $P=3$ pairs of Granger components." --- Rebuttal Comment 1.1: Title: after a read of the rebuttal Comment: I want to thank the authors for a nice rebuttal that addressed most of my questions. Automatic differentiation $\neq$ numerical differentiation. Please fix as this is misleading. 
--- Reply to Comment 1.1.1: Title: Thanks for the clarification Comment: Thanks for catching this -- indeed, automatic differentiation is not the proper nomenclature in this case. It will be changed to "numerical".
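As clarified in this thread, "numerical" differentiation here means finite differences. A minimal sketch of a central-difference gradient checked against an analytic gradient; the quadratic cost `J` is a hypothetical stand-in, not the paper's GC objective:

```python
import numpy as np

def numerical_gradient(J, w, h=1e-6):
    """Central-difference approximation of dJ/dw, one coordinate at a time."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (J(w + e) - J(w - e)) / (2 * h)
    return g

# Hypothetical stand-in cost: J(w) = w^T A w, whose analytic gradient is 2 A w.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
J = lambda w: w @ A @ w
w = np.array([1.0, -1.0])
g_num = numerical_gradient(J, w)
g_ana = 2 * A @ w
```

For a quadratic cost the central difference is exact up to floating-point error, which makes it a convenient check of a manually coded derivative.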
Summary: This paper proposes a novel (blind) source separation method that extracts pairs of components from multivariate time series between which Granger causality (GC) is maximal. This can be a very useful tool to assess directed information flow between brain areas in an unsupervised way, without having to specify the areas a priori. The method is very elegant and implemented in a straightforward way by setting up a corresponding optimization problem and solving it via block-coordinate descent alternating between updates of the projection vectors for the sending and receiving source. Analytic gradients are provided but also autograd is reported to work well. An interesting deflation scheme is also provided whereby the sending source is projected out in each step. Thus, the same sending source cannot be found in multiple GC component pairs, but the residual of a sending source can play a role as either sender or receiver in a subsequently extracted pair. The method also implements time-reversal, a method to robustify GC estimates with respect to artifacts of volume conduction. A small set of simulations illustrates the convincing properties of the method, and a convincing application to motor-imagery brain-computer interface data is also provided. Strengths: The paper proposes an elegant and potentially useful method. The derivation of the method is easy to follow, apart from minor gaps. The technical parts are sound. The simulations and real data results are convincing and provide a good picture of the capabilities of the method. The writing is clear. Overall, a nice and self-contained paper. Weaknesses: The simulations and real data analyses could be better developed. The simulations could be more quantitative, e.g. studying 100 systems instead of only one, and reporting the distribution of reconstruction metrics. The impact of factors such as the SNR could be studied systematically, along with different types of noise. 
More methods could be included in the empirical comparisons. For example, BSS methods like MVARICA [1] and SCSA [2] do not assume independent components but model the sources exactly by an MVAR model, from which GC between every pair of components can be assessed. In the BCI context, the extraction of class-specific sources using CSP [3] or SMR oscillations using SSD [4] could be compared to the proposed method, although I understand that working code may not be available for all methods. Theoretically, it would be critical to also discuss the identifiability of the model. Linear mixtures of MVAR processes are again MVAR models and a valid question is why the maximization of GC should provide the “true” unmixing. This is especially critical as research has shown that even a mixing of independent sources can induce spurious GC [5-9]. Moreover, no non-Gaussianity of residuals as in [2] is assumed to guide the reconstruction. [1] https://www.sciencedirect.com/science/article/abs/pii/S1053811908008549 [2] https://ieeexplore.ieee.org/abstract/document/5466024 [3] https://ieeexplore.ieee.org/abstract/document/4408441 [4] https://www.sciencedirect.com/science/article/abs/pii/S1053811914005503 [5] https://www.sciencedirect.com/science/article/abs/pii/S1053811912009469 [6] https://www.sciencedirect.com/science/article/abs/pii/S105381191401009X [7] https://ieeexplore.ieee.org/abstract/document/7412766 [8] https://link.springer.com/article/10.1007/s10548-016-0538-7 [9] https://www.frontiersin.org/articles/10.3389/fncom.2016.00121/full Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - It would be interesting if the authors could provide more information about the extracted sources in the motor imagery real data example. Could you please plot power spectra of the sources compared to some of the relevant channels over the motor areas. 
Similarly, it would be interesting to see what the GC or TRGC connectivity spectra of the extracted components look like in comparison to the power spectrum. For this, spectral G-causality as implemented in the MVGC toolbox could be used. Finally, the corresponding forward models could be mapped into the brain using some inverse modeling techniques, which could give further insight into the physiological relevance of the extracted sources. [10] analyze functional connectivity before and during motor imagery using undirected FC metrics, and it could be interesting to see if the sources are similar. - Some of the derivations are not immediately clear to me. The step from Eqs. (3) and (5) to (6) is not completely clear. Perhaps a few intermediate steps could be added? When expanding the square of the residual, should there be no mixed terms (products of z and z_p)? - z_p and y_p should be defined. These are the temporally embedded multivariate versions of z and y. In Eqs (3) and (4) these should still depend on (t) - line 93: this is not the only possibly undesirable case, one can also show that spurious GC can emerge from mixtures of independent sources. E.g. x_1 = s + n_1 and x_2 = s + n_2 for independent noises n_1 and n_2 can lead to GC between x_1 and x_2 in either direction. - Line 95 (“note that”): this is not true in that strict sense. What can be shown is that the flow from y -> z is reduced compared to the original temporal order, but there is no guarantee that it would vanish or even reverse (see [7]) - However, since it is true that, for a given unidirectional flow x -> y, subtracting the time-reversed GC affects both the direction x -> y and y -> x (essentially adding a negative term for the direction y -> x), it makes sense to not only consider (and optimize) time-reversed differences GCR_(x->y) = GC_(x->y) – GC_(x_rev -> y_rev) but also to directly work with the net GC between both directions: TRGC_(x->y) = GCR_(x->y) - GCR_(y->x). 
- Supplement “The forward model expresses the level of correlation”: technically it is pretty much the covariance. - No reference for the origin of the fMRI dataset is given and no IRB approval for that study is mentioned. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper is overall solid; however, a theoretical discussion of the model identifiability would be indicated, also considering the proposed deflation scheme. The simulations and real data analyses could be extended. Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)', 'Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Questions ### It would be interesting if the authors could provide more information about the extracted sources in the motor imagery real data example. Could you please plot power spectra of the sources compared to some of the relevant channels over the motor areas. Similarly, it would be interesting to see what the GC or TRGC connectivity spectra of the extracted components look like in comparison to the power spectrum. For this, spectral G-causality as implemented in the MVGC toolbox could be used. Finally, the corresponding forward models could be mapped into the brain using some inverse modeling techniques, which could give further insight into the physiological relevance of the extracted sources. [10] analyze functional connectivity before and during motor imagery using undirected FC metrics, and it could be interesting to see if the sources are similar. We have added power spectral plots of the GCs (computed with the multitaper method) alongside the spectra of the relevant single electrodes to the attached PDF. Of note, it is interesting that $y_2(t)$ (i.e., the driving signal of GC 2) has very large alpha power, whereas the corresponding driven signal $z_2(t)$ has very low alpha power. This appears to be consistent with the driving signal being preparatory (alpha is synchronized) and the driven signal reflecting execution (alpha is desynchronized). We agree that connectivity spectra and inverse modeling of the components are very interesting in this application. The results of these analyses are not yet available due to time constraints of the rebuttal period. ### Some of the derivations are not immediately clear to me. The step from Eqs. (3) and (5) to (6) is not completely clear. Perhaps a few intermediate steps could be added? When expanding the square of the residual, should there be no mixed terms (products of z and z_p)? We derive Eq. (6) here. The target signal in the linear regression is $z(t)$. 
The predictors are $\textbf{z}_p(t)$. The Mean Squared Error is $\left< \epsilon_z^2 \right> = E [ ( z(t) - \textbf{h}^T \textbf{z}_p(t) )^2 ]$ where $\textbf{h}$ is the filter predicting $z(t)$ from $\textbf{z}_p(t)$. We would like to first identify the filter that minimizes the MSE: $\textbf{h}^{\ast} = \mathrm{arg~min}_{\textbf{h}} ~ \left< \epsilon_z^2 \right> $ From the definition of the Wiener filter, this is given by: $\textbf{h}^{\ast} = \textbf{Q}^{-1} \textbf{q} $ where $\textbf{Q}=E [\textbf{z}_p(t) \textbf{z}_p^T(t)] $ and $\textbf{q}=E [\textbf{z}_p(t) z(t)] $. The value of the residual when employing the optimal Wiener filter is: $\epsilon_r = z(t) - {\textbf{h}^{\ast}}^T \textbf{z}_p(t) = z(t) - \textbf{q}^T \textbf{Q}^{-1} \textbf{z}_p(t) $ Taking the expectation of the square, we obtain the expression for the minimum mean squared error (MMSE): $\left< \epsilon_r^2 \right> = E [ \left( z(t) - \textbf{q}^T \textbf{Q}^{-1} \textbf{z}_p(t) \right) \left( z(t) - \textbf{q}^T \textbf{Q}^{-1} \textbf{z}_p(t) \right) ] $ $\left< \epsilon_r^2 \right> = E [ z^2(t) - 2 \textbf{q}^T \textbf{Q}^{-1} z(t) \textbf{z}_p(t) + \textbf{q}^T \textbf{Q}^{-1} \textbf{z}_p(t) \textbf{z}_p^T(t) \textbf{Q}^{-1} \textbf{q} ] $ $\left< \epsilon_r^2 \right> = E [ z^2(t) ] - 2 \textbf{q}^T \textbf{Q}^{-1} E [ z(t) \textbf{z}_p(t) ] + \textbf{q}^T \textbf{Q}^{-1} E [ \textbf{z}_p(t) \textbf{z}_p^T(t) ] \textbf{Q}^{-1} \textbf{q} $ $\left< \epsilon_r^2 \right> = \sigma_z^2 - 2 \textbf{q}^T \textbf{Q}^{-1} \textbf{q}+ \textbf{q}^T \textbf{Q}^{-1} \textbf{Q} \textbf{Q}^{-1} \textbf{q} $ $\left< \epsilon_r^2 \right> = \sigma_z^2 - \textbf{q}^T \textbf{Q}^{-1} \textbf{q} $ This is being added to the derivation in the Supplementary Material. ### z_p and y_p should be defined. These are the temporally embedded multivariate versions of z and y. In Eqs (3) and (4) these should still depend on (t) Thanks for catching this. It has been fixed in the revised version. 
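The MMSE identity $\left< \epsilon_r^2 \right> = \sigma_z^2 - \textbf{q}^T \textbf{Q}^{-1} \textbf{q}$ derived above can be checked numerically. This sketch uses an arbitrary AR(1) signal and sample moments in place of expectations (the signal and lag count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 5000, 3                      # number of samples and number of lags
z = np.zeros(T)
for t in range(1, T):               # an arbitrary temporally correlated signal
    z[t] = 0.8 * z[t - 1] + rng.standard_normal()

t_idx = np.arange(L, T)
Zp = np.stack([z[t_idx - l] for l in range(1, L + 1)])  # z_p(t), shape (L, T-L)
zt = z[t_idx]

Q = Zp @ Zp.T / len(t_idx)          # Q = E[z_p z_p^T] (sample estimate)
q = Zp @ zt / len(t_idx)            # q = E[z_p z]   (sample estimate)
h = np.linalg.solve(Q, q)           # Wiener filter h* = Q^{-1} q

mse_direct = np.mean((zt - h @ Zp) ** 2)   # <eps_r^2> computed via the filter
mse_formula = np.mean(zt ** 2) - q @ h     # sigma_z^2 - q^T Q^{-1} q
```

With sample moments the two expressions agree exactly (up to floating point), which is the content of the derivation.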
### line 93: this is not the only possibly undesirable case, one can also show that spurious GC can emerge from mixtures of independent sources. E.g. x_1 = s + n_1 and x_2 = s + n_2 for independent noises n_1 and n_2 can lead to GC between x_1 and x_2 in either direction. Interesting: a reference to this would be helpful so that we can add a note to the manuscript. Thanks. ### Line 95 (“note that”): this is not true in that strict sense. What can be shown is that the flow from y -> z is reduced compared to the original temporal order, but there is no guarantee that it would vanish or even reverse (see [7]) This is very important and greatly appreciated. A more careful reading of [7] is underway and the statement is being revised. ### However, since it is true that, for a given unidirectional flow x -> y, subtracting the time-reversed GC affects both the direction x -> y and y -> x (essentially adding a negative term for the direction y -> x), it makes sense to not only consider (and optimize) time-reversed differences GCR_(x->y) = GC_(x->y) – GC_(x_rev -> y_rev) but also to directly work with the net GC between both directions: TRGC_(x->y) = GCR_(x->y) - GCR_(y->x). This insight into time-reversed GC is very helpful and may improve the algorithm. Modified objective functions that reflect the net GC will need to be evaluated given these comments. ### Supplement “The forward model expresses the level of correlation”: technically it is pretty much the covariance. Agreed and fixed. ### No reference for the origin of the fMRI dataset is given and no IRB approval for that study is mentioned. The omission of any reference to the fMRI data was deliberate, and motivated by the double-blind review policy of NeurIPS. The dataset has been previously published (citation will be provided in revised version) and all procedures were approved by the institution's IRB. 
(Further response limited by character limit) --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: I thank the authors for their clarifications. At the same time I agree with the AC that it would actually be quite nice to provide a statistical approach for assessing the statistical significance of the interaction of a given pair of extracted sources at any step of the deflation. Since these are multivariate fits with multiple parameters, overfitting can occur and has to be accounted for. That is, a null distribution consistent with independent sources but for the same degree of (over)fitting should be constructed. Regarding my comment that instantaneous mixing of independent sources can lead to non-zero GC, I believe that the following references contain respective examples: Winkler, I., Panknin, D., Bartz, D., Müller, K. R., & Haufe, S. (2016). Validity of time reversal for testing Granger causality. IEEE Transactions on Signal Processing, 64(11), 2746-2760. Brunner, C., Billinger, M., Seeber, M., Mullen, T. R., & Makeig, S. (2016). Volume conduction influences scalp-based connectivity estimates. Frontiers in computational neuroscience, 10, 121. Van de Steen, F., Faes, L., Karahan, E., Songsiri, J., Valdes-Sosa, P. A., & Marinazzo, D. (2019). Critical comments on EEG sensor space dynamical connectivity analysis. Brain topography, 32, 643-654. The point being that if independent sources s(t) are mixed into sensor x(t) = M * s(t) via a mixing matrix M, then the coefficient matrices of an MVAR model of x(t) are also just the coefficients of the MVAR models of the sources s(t), transformed by the same matrix M. So, even if the underlying sources are independent (diagonal MVAR coefficients), the mixed sensors will have an MVAR representation with off-diagonal terms by virtue of M, indicating Granger causality. --- Reply to Comment 1.1.1: Title: Response to AC comment Comment: Thank you for the feedback and for the references. 
When reading the explanation of how the mixing matrix introduces spurious GCs, it appears as if this property is a strong rationale for the proposed method (i.e., demixing the signals into components prior to computing GC). I don't appear to have access to the AC's comment about statistical significance (unless I am not seeing it in the system), but it certainly seems like a good idea and not difficult to add. In particular, this could be implemented using surrogate data [1] that retains the power spectrum of the original signals but destroys the phase (and thus GCs among the latent signals). The algorithm could then be run over ~1000 surrogate records to construct a null distribution of GC values against which the true value may be compared. This would be performed after each deflation, as suggested. [1] Theiler, J., Eubank, S., Longtin, A., Galdrikian, B., & Farmer, J. D. (1992). Testing for nonlinearity in time series: the method of surrogate data. Physica D: Nonlinear Phenomena, 58(1-4), 77-94.
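The surrogate scheme described in this reply can be sketched as a minimal phase-randomization routine in the spirit of Theiler et al. [1]; the Granger-component pipeline itself is not reproduced here, only the surrogate construction:

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same amplitude spectrum as x but random phases,
    destroying phase-dependent structure such as Granger causality."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                  # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1024))   # an autocorrelated test signal
surrogate = phase_randomize(x, rng)
```

Running the GC estimator over many such surrogates would yield a null distribution against which the observed GC value can be compared after each deflation.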
Summary: The paper proposes a factorization model to extract P pairs of latent components from a multivariate time series such that in each pair one of the time series Granger causes the other. The authors apply the approach in analysing EEG and fMRI data to show meaningful conclusions. Strengths: The paper addresses an interesting problem of finding latent variables with Granger causal structure from a multivariate time series. The paper is generally well written. Weaknesses: Although the paper proposes an intriguing approach for finding latent causal structure, in general, it might be limiting since it only considers structure with pairwise time series demonstrating causal influence on each other while in practice we would expect multiple time series potentially affecting each other. Can the authors elaborate on the effectiveness of the assumed latent structure a bit more? The simulated data does not provide complete insight into the performance of the method. It might be useful to explore more realistic situations such as more time series, more intricate (conditional) causal influences among multiple time series, and/or situations where the proposed model might fail to capture causality, e.g., one time series driving more than one time series. This might be helpful in better assessing any false positive detections and false negative misses. It will be useful to compare the proposed method to standard conditional Granger causality on the real dataset to extract the underlying causal structure and assess if it is similar to the one inferred by the proposed approach. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How are the maps in Figure 3 and 4 generated, using w and v vectors or using A matrix? - line 114 and line 142: if y_1 and z_1 are being explicitly removed, then how can s_2 -> s_3 be identified after s_1 -> s_2? Can you please elaborate? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors address limitations of this approach, e.g., in the context of a driving time series driving only a single driven time series. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Questions ### Q1. How are the maps in Figure 3 and 4 generated, using w and v vectors or using A matrix? The maps are generated using the A matrix (i.e., the forward matrix). This is motivated by the literature [1,2] which argues that the forward models, by depicting the activity that is recovered by a spatial filter, lend themselves to better interpretability. In other words, the maps depicted in Figures 3 and 4 represent the brain activity that is expressed by each of the recovered components. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J. D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage, 87, 96-110. [2] Parra, L. C., Spence, C. D., Gerson, A. D., & Sajda, P. (2005). Recipes for the linear analysis of EEG. Neuroimage, 28(2), 326-341. ### Q2. Line 114 and line 142: if y_1 and z_1 are being explicitly removed, then how can s_2 -> s_3 be identified after s_1 -> s_2? Can you please elaborate? Only the driving signal y_1 is explicitly removed. The driven signal z_1 is *not* regressed. Apologies for the confusion -- in the statement: "This takes the form of a spatiotemporal regression such that any signals that are correlated with y_1(t) or its lagged versions y_1(t-l), l=1,...,L are removed. Given that this includes z_1(t), the driven signal is not explicitly removed", the phrase "Given that this includes z_1(t)" is not correct and will be removed from the text. The intent of the statement was that the $s_1$ component that is present in $z_1$ will be removed by regressing out $y_1$. 
To understand the proposed deflation scheme, consider the case of a VAR(1) system where s1→s2 and s2→s3: $s_1(t) = a s_1(t-1) + \epsilon_1(t)$ $s_2(t) = b s_1(t-1) + c s_2(t-1) + \epsilon_2(t) $ $s_3(t) = d s_2(t-1) + e s_3(t-1) + \epsilon_3(t)$ Assuming that $y_1$ recovers $s_1$, after iteration 1 of the algorithm, we regress out [$s_1(t)$, $s_1(t-1)$], leaving us with: $\tilde{s}_1(t) = \tilde{\epsilon}_1(t)$ $\tilde{s}_2(t) = f \tilde{s}_2(t-1) + \tilde{\epsilon}_2(t)$ $\tilde{s}_3(t) = g \tilde{s}_2(t-1) + h \tilde{s}_3(t-1) + \tilde{\epsilon}_3(t)$ where $\tilde{s}_i$ denotes the new values of the source signals resulting from the regression. Now what is left is the s2→s3 relationship. ## Weaknesses ### Although the paper proposes an intriguing approach for finding latent causal structure, in general, it might be limiting since it only considers structure with pairwise time series demonstrating causal influence on each other while in practice we would expect multiple time series potentially affecting each other. Can the authors elaborate on the effectiveness of the assumed latent structure a bit more? We acknowledge that the approach assumes a specific type of signal model. On the other hand, this type of model captures some important "use-cases", for example EEG and MEG. In both these modalities, the physics of the forward problem dictates that connected sources (i.e., dipolar current sources in the cortex) are linearly mixed in the sensors due to volume conduction. Apart from encephalography, the finding of canonical resting-state networks in the GCs of the fMRI analysis suggests that this model is also appropriate in BOLD-fMRI. Thus, the proposed method appears to be well-suited to at least the most common forms of brain imaging. It is also our hope that the central idea of finding components that maximize Granger causality can be generalized in the future to include more flexible signal models, non-linear interactions, and alternative deflation schemes. 
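The VAR(1) toy system described in this reply can be simulated to check the deflation argument. The coefficient values below are hypothetical, and `gc_strength` is an illustrative lag-1 variance-ratio estimate of GC, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000
a, b, c, d, e = 0.5, 0.8, 0.3, 0.8, 0.3       # hypothetical VAR(1) coefficients
s1, s2, s3 = np.zeros(T), np.zeros(T), np.zeros(T)
for t in range(1, T):                          # simulate s1 -> s2 -> s3
    s1[t] = a * s1[t - 1] + rng.standard_normal()
    s2[t] = b * s1[t - 1] + c * s2[t - 1] + rng.standard_normal()
    s3[t] = d * s2[t - 1] + e * s3[t - 1] + rng.standard_normal()

def regress_out(x, regressors):
    """Residual of x after least-squares regression on the given regressors."""
    R = np.vstack(regressors).T
    beta, *_ = np.linalg.lstsq(R, x, rcond=None)
    return x - R @ beta

def gc_strength(y, z):
    """G = 1 - <eps_full^2>/<eps_reduced^2>, using one lag of each signal."""
    e_red = regress_out(z[1:], [z[:-1]])
    e_full = regress_out(z[1:], [z[:-1], y[:-1]])
    return 1.0 - np.mean(e_full ** 2) / np.mean(e_red ** 2)

# Deflation: regress s1(t) and s1(t-1) out of the remaining signals
lags_of_s1 = [s1[1:], s1[:-1]]
s2_d = regress_out(s2[1:], lags_of_s1)
s3_d = regress_out(s3[1:], lags_of_s1)
```

Before deflation, GC(s1→s2) is large and the reverse direction is near zero; after regressing out the lags of s1, the s2→s3 relationship remains detectable in the deflated signals, in line with the argument above.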
### The simulated data does not provide complete insight into the performance of the method. It might be useful to explore more realistic situations such as more time series, more intricate (conditional) causal influences among multiple time series, and/or situations where the proposed model might fail to capture causality, e.g., one time series driving more than one time series. This might be helpful in better assessing any false positive detections and false negative misses. Agreed. Improving the simulations to consider systems with more sources, and where the assumed signal model does not hold, will be added to a revised version of the paper. ### It will be useful to compare the proposed method to standard conditional Granger causality on the real dataset to extract the underlying causal structure and assess if it is similar to the one inferred by the proposed approach. Agreed. We have added the results of conventional Granger causality to the supplemental page (see panels a and b of the accompanying PDF), where the causality matrix is shown for both the original (electrode) data (panel a, 64 x 64) as well as the recovered components (panel b, 2 x 2). In other words, we measured conventional Granger causality on the original data, and then again on the components found by the proposed method. The results suggest that: - The strength of the Granger causality is much higher for the components (0.32 for $y_1 \rightarrow z_1$ and 0.18 for $y_2 \rightarrow z_2$) for the example here compared to the raw electrodes (largest value is 0.12). - It is very difficult to infer the structure of the system from the 64-by-64 causality matrix measured on the raw electrodes. - It is interesting that the second driving component ($y_2$) drives *both* $z_1$ and $z_2$ strongly, indicating a relationship of the form: $z_1 \leftarrow y_2 \rightarrow z_2$. 
This actually indicates that the method *is* capable of finding connections of this form in some cases, but that this requires integration of the information across multiple pairs of components. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: I would like to thank the authors for the detailed comments, and reporting the additional analysis. I believe that adding these details will improve the quality of the paper significantly.
Rebuttal 1: Rebuttal: ## Author response to all Reviewers We are grateful for the thorough reading and helpful feedback from all of the Reviewers. Detailed responses to each Reviewer's feedback are provided separately. This response describes the figures that have been included in the additional PDF, and highlights the most salient points of the author response. ### Comparison of proposed method to standard Granger causality on the real EEG data set *suggested by Reviewer BFyu* The suggestion led to an interesting finding. Namely, the algorithm has identified a structure of the form $z_2 \leftarrow y_2 \rightarrow z_1$. This is evident in the causality matrix of the recovered components (listed here for the right motor imagery data and shown in panel b of the attached PDF): $ \left( \begin{array}{cc} G_{y_1 \rightarrow z_1} & G_{y_1 \rightarrow z_2} \newline G_{y_2 \rightarrow z_1} & G_{y_2 \rightarrow z_2} \end{array} \right) = \left( \begin{array}{cc} 0.32 & 0.07 \newline 0.16 & 0.18 \end{array} \right)$, where $G$ is the strength of causality, defined as: $G = 1-\frac{\left< \epsilon_{f}^2 \right>}{\left< \epsilon_{r}^2 \right>}$, where $\epsilon_{f}$ is the residual of the full regression model and $\epsilon_{r}$ is the residual of the reduced model. $G$ is bounded between 0 and 1, with 0 indicating zero GC. The causality matrix indicates that component $y_2$ drives both $z_1$ and $z_2$. Interestingly, $y_2$ is a component with a topography over the left frontal electrodes, whereas $z_1$ is concentrated over the right central region and $z_2$ is focused over the left central electrodes. This is consistent with a planning/premotor circuit (i.e., $y_2$) driving both activation of the cued motor circuit (the left motor cortex $z_2$) as well as *inhibiting* the ipsilateral circuit (the right motor cortex $z_1$). 
This interpretation is also consistent with the power spectra of the components (*suggested by Reviewer 57Zq* and shown in panel d, lower right, of the attached PDF): - $z_2$ has low alpha power (desynchronization = activation) - $z_1$ has high alpha power (synchronization = inhibition) The finding of $y_2$ driving both $z_1$ and $z_2$ contradicts one of the stated limitations in the paper: *A limitation of Algorithm 1 is that regressing out the driving signal after each iteration prevents one from being able to identify connections of the form $s_2 \leftarrow s_1 \rightarrow s_3$* The caveat here is that the structure $z_2 \leftarrow y_2 \rightarrow z_1$ spans *multiple* latent pairs. To evaluate the standard approach to Granger causality, panel (a) in the attached PDF depicts the causality matrix of the individual electrode signals. The maximum value is 0.12, and it's difficult to ascertain the system structure from inspection of the matrix of causality values. ### Comparison to MVARICA *suggested by Reviewer 57Zq* Motivated by the suggestion to compare the proposed method to a component analysis technique that employs the VAR model, we employed the SCoT toolbox [1] to test the MVARICA technique on the real EEG dataset. To our surprise, the causality matrix produced by the resulting components had all values $<0.01$. This may be a consequence of working with the VAR residuals to perform the component decomposition. In any case, the spatial topographies produced by this comparison technique are shown in panels (e) and (f) of the attached PDF. Although there is a clear structure in several of the components (i.e., many components are expressed over frontocentral and central electrodes), the topographies are less smooth, and more importantly, do not appear to lateralize with the side of the cued hand. [1] Billinger, M., Brunner, C., & Müller-Putz, G. R. (2014). SCoT: a Python toolbox for EEG source connectivity. Frontiers in neuroinformatics, 8, 22. 
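As a sanity check on the strength-of-causality measure $G$ defined in this response: the toy unidirectional pair and lag-1 regressions below are illustrative (the coefficients are hypothetical, not the exact estimator or lag structure used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50000
y = rng.standard_normal(T)                 # driving signal (white noise)
z = np.zeros(T)
for t in range(1, T):                      # z is unidirectionally driven by y
    z[t] = 0.9 * y[t - 1] + 0.2 * z[t - 1] + 0.5 * rng.standard_normal()

def residual_var(target, predictors):
    """Mean squared residual of a least-squares fit."""
    R = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(R, target, rcond=None)
    return np.mean((target - R @ beta) ** 2)

e_r = residual_var(z[1:], [z[:-1]])            # reduced model: own lag only
e_f = residual_var(z[1:], [z[:-1], y[:-1]])    # full model: add the driver's lag
G_fwd = 1 - e_f / e_r                          # y -> z: substantially above zero
G_rev = 1 - residual_var(y[1:], [y[:-1], z[:-1]]) / residual_var(y[1:], [y[:-1]])
```

For a genuinely unidirectional pair, $G$ in the forward direction lies strictly between 0 and 1 while the reverse direction stays near 0, matching the bounds stated above.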
### Panel legend *all panels pertain to the EEG dataset* (a) Causality matrix measured between all pairs of electrodes for the right motor imagery condition. Rows (columns) depict the driving (driven) signal. The value of each element is the strength of causality. The largest value on this dataset is 0.12. (b) Causality matrix measured between the recovered Granger components for the right motor imagery condition. (c) Top row: power spectra measured for selected electrodes, shown for the left motor imagery condition. Spectral analysis was performed with the multitaper method with 7 Slepian tapers. The power spectra are unnormalized to facilitate comparison of the SNR between the raw electrodes and recovered components. Characteristic "bumps" are evident over the delta (1-3 Hz) and alpha (8-13 Hz) region. Bottom row: Power spectra measured for the recovered Granger components. Note the large increase in SNR compared to the individual electrodes. (d) Same as c but now shown for the right motor imagery condition. Note that the power spectra of the Granger components are remarkably consistent with those of the left motor imagery condition. (e) The spatial topographies of the first six components as measured by the MVARICA method on the left motor imagery dataset. (f) Same as e but now shown for the right motor imagery condition. Note that the topographies do not clearly lateralize with the side of the cued hand. ## Response to Ethics Review Please note that the omitted citation to the fMRI dataset was deliberate and served to protect the identity of the authors. The dataset has been previously published and all experimental procedures were approved by the institutional review board of the home institution. This will be included after the review process. 
Regarding the EEG dataset that was obtained from the [GigaDB](https://www.re3data.org/repository/r3d100010478) website (for which a citation to the paper was included), the terms of use of GigaDB indicate that the data is in the public domain under a CC license. Pdf: /pdf/6a7e93112db75d9bed2e8be2be4ac99f79603e2b.pdf
NeurIPS_2023_submissions_huggingface
2023
ContinuAR: Continuous Autoregression For Infinite-Fidelity Fusion
Accept (poster)
Summary: The author proposes a general auto-regression model for multi-fidelity fusion. By simplifying the ODEs over the fidelity indicator in a linear form, closed-form solutions can be derived, and computational efficiency can be further improved using a rank-1 approximation. The experiment results also show superior performance of the method. Strengths: 1. A general linear fidelity differential equation is proposed. It serves as a simplified version of IFC and a generalized version of IMC. 2. Closed-form solutions provide a more efficient way to conduct multi-fidelity fusion with infinite fidelities, especially for high-dimensional problems. 3. The effectiveness of this approach is demonstrated through the testing of both simulated and real-world data. 4. The paper exhibits a clear structure and is straightforward to comprehend. Weaknesses: 1. In the experiment section, it seems the baseline methods are tested with default settings without fine-tuning. The comparisons are not completely fair. 2. One big benefit of using the ODE formulation is extrapolation, since the ODE formulation can capture the underlying dynamics between different fidelities. In the paper, there is no discussion regarding this matter. 3. \eta is set to 0.5 and 0.75 in all experiment settings. It might be a bit high. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you also provide some fidelity interpolation and extrapolation results? 2. Could you also give some results with far fewer high-fidelity samples? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations in the paper. I do not believe there is any potential negative societal impact of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer 4 **Could you also provide some fidelity interpolation and extrapolation results?** We have tested this functionality and found the interpolation results to be accurate, as expected. We did not include these results due to space limitations and the motivation of this work---proposing a new paradigm for infinite multi-fidelity fusion based on a tractable model. Our experiments aim to convey the advantages over traditional approaches in terms of the main concerns: accuracy and computational cost. We fully agree with the reviewer and will investigate this aspect further, particularly for applications like Bayesian optimization. **"Could you also give some results with much fewer high-fidelity samples."** Thank you for your valuable suggestion. Such an investigation is implicitly included in our Cost Evaluation experiment (Fig. 5 in the main paper). The number of training profiles (from low- to high-fidelity) is [10, 62, 33, 4, 1], [41, 19, 50, 6, 9], and [37, 3, 26, 32, 6] for the first three points in Fig. 5. We can see that even with one highest-fidelity training sample, our method can still achieve relatively good performance compared with the other baselines. It also highlights the importance of choosing a proper training data profile. For instance, the 3rd point always outperforms the 2nd point in RMSE even though the 2nd point has more highest-fidelity training samples. We will add more discussion on this in the revision if space allows. **"$\eta$ is set to 0.5 and 0.75 in all experiment settings. It might be a bit high."** Thank you for the comment. We will investigate a smaller $\eta$ in future work. One challenge is that when $\eta$ is small, the number of training data for the highest fidelity will be very small. For instance, for a 5-fidelity setting, if $\eta$ is 0.25, the number of training data for the highest fidelity will be 1/1024 of that for the lowest fidelity. 
We are currently solving this issue by letting the model decide $\eta$ automatically for each fidelity. However, we believe that is beyond the scope of this work and will leave it for future work.
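The geometric decimation of training-set sizes discussed in the rebuttal above (each fidelity keeping a fraction $\eta$ of the samples of the level below) can be sketched in a few lines. The function name `training_sizes` and the example budgets are illustrative assumptions, not from the paper:

```python
def training_sizes(n_lowest, eta, n_steps):
    """Sizes under geometric decimation: each fidelity keeps a fraction
    eta of the training samples of the fidelity below it."""
    return [max(1, round(n_lowest * eta ** k)) for k in range(n_steps + 1)]

# With eta = 0.5, the budget halves at every fidelity level.
print(training_sizes(64, 0.5, 4))  # [64, 32, 16, 8, 4]

# With eta = 0.25, five decimation steps leave only 0.25**5 = 1/1024
# of the lowest-fidelity budget, matching the figure in the rebuttal.
print(0.25 ** 5 == 1 / 1024)  # True
```

This makes concrete why a small $\eta$ starves the highest fidelity of data, which is the challenge the rebuttal names.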
Summary: Multi-fidelity models are widely used for combining training data obtained from information sources with different degrees of precision or accuracy. More specifically, this allows for the combination of greater quantities of noisier but more cheaply-obtained examples with more faithful (but limited) data. In this work, the authors describe an extension to infinite-fidelity fusion that incorporates information contained within the fidelity indicator itself, while also mitigating issues relating to training time and complexity, as well as scalability to high-dimensional outputs. The authors also formulate a surrogate model that unifies a large selection of pre-existing multi- and single-fidelity models. Experiments on synthetic and real-world data indicate that the model obtains significant performance improvements over competing techniques, without incurring an unreasonably large speed penalty (compared to IFC). Strengths: - The problems investigated in this work, along with the associated solutions, are non-trivial, and the authors diligently include detailed derivations for all their contributions. - The improvements over IFC are well-motivated in this work, and I appreciated how there was a strong emphasis on computational complexity and training stability. Both of these are highly prized by practitioners, and I would expect the performance improvements reported here to be transferable to other problem domains as long as the training process is stable. Weaknesses: - The paper is currently quite dense and difficult to follow at times. While I appreciate that the authors present several varied contributions here, I believe the presentation of the main paper could be improved further to highlight the key takeaways while deferring detailed derivations to the supplementary. 
- The paper bears very strong writing similarities to *GAR: Generalized Autoregression for Multi-Fidelity Fusion* by Wang et al., where some sentences are nearly copied in their entirety with only a single word replaced here and there. This is especially noticeable in the *Introduction* and *Background* sections of the paper, as well as some of the *Related Work*. The contributions themselves are different, although I am surprised that this 2022 paper is only given a cursory reference given the degree of similarity in the problem statement and experimental set-up. - Maybe I missed this while reading the paper, but why is the IFC method listed as IFC-GPT in the figures and tables? - A handful of limitations for this method are listed at the very end of the paper, but these currently come across as an afterthought. I would prefer to see additional ablation studies or synthetic examples showing specific situations where the proposed models may not work as well as expected. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: I have listed my concerns with the paper in the *Weaknesses* section. I encourage the authors to focus on these comments when continuing the discussion on the paper. The contributions in this paper are insightful, and could inspire further research within the community. However, I currently have major concerns about the writing and novelty of the paper (which are very similar to a pre-existing paper that is only trivially referenced in the submission - I am personally not comfortable with this degree of overlap), as well as clarity and presentation. In view of the above, I am currently inclined towards rejecting this submission, but I look forward to reading the feedback from other reviewers as well as the author rebuttal. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: There shouldn't be any immediate negative societal impact resulting from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **“main paper could be improved further to highlight the key takeaways”** We agree with the reviewer. However, we feel this is a challenging task since our work is quite theoretical: it revises the classic multi-fidelity autoregression and extends it to a tractable form for infinite-fidelity fusion by introducing the concept of differential equations. Meanwhile, our method is highly application-oriented, and it is difficult to balance technicality and accessibility. We endeavor to improve the presentation by: 1) highlighting the theoretical novelty of the proposed linear differential equation approach versus the non-linear approach in the introduction and at the end of Section 3.1; 2) pointing out the connection between the proposed method and GAR at the end of the introduction and in Section 3.3; 3) revising our abstract to highlight the novelty of the infinite-fidelity problem instead of mentioning many other less significant aspects; and 4) shortening the derivations in the main paper and moving the details to the supplementary materials. **“The paper bears very strong writing similarities to GAR: Generalized Autoregression for Multi-Fidelity Fusion by Wang et al…”** We appreciate the reviewer for pointing this out and will revise our manuscript accordingly for better presentation. We did learn a lot from the GAR paper in formulating the motivation, problem definition, and some related work due to the very close connection between these two works---both are essentially extensions of the foundational AR model, but for different types of problems (one for the infinite-fidelity problem and one for the non-aligned high-dimensional problem). There are also significant differences in method and novelty. To resolve the reviewer’s concerns, we will make this very clear in the introduction and methodology sections. 
We will include discussions on the detailed connections, including 1) the way to handle high-dimensional outputs and the non-subset data structure and 2) the particular setting that turns GAR into a special case of the proposed method. **“Maybe I missed this while reading the paper, but why is the IFC method listed as IFC-GPT in the figures and tables?”** As presented in the original work, there are two variations of IFC, one with deep learning (IFC-ODE) and one with Gaussian process ODE (IFC-GPODE), the latter showing better results in the original manuscript. Thus, we show the results of IFC-GPODE and will state this clearly in the experimental section. Due to our carelessness, we used the wrong name IFC-GPT (the name used in the original IFC code) rather than IFC-GPODE. We will correct this in the revision. **"additional ablation studies or synthetic examples"** Thank you for your suggestions. We did not find particularly poor performance of our method in the many-fidelity setting when compared with other methods. To understand the limitations, we followed the classic subset experiment setting with η=0.5 and reduced the number of fidelities from five to two. The results are shown in Fig. 1 of the extra PDF file. In this case, we often see that our approach is almost identical to the classic AR, losing its advantages as an infinite-fidelity fusion method. Also, when training data are scarce, our method does not perform better than the other baselines. There are certainly other factors that may affect the performance, such as the choice of fidelities and B(x). We will investigate this in future work. We believe that the current experimental results are sufficient to demonstrate the main advantages of the first tractable infinite-fidelity fusion method, which is orders of magnitude faster than the only existing infinite-fidelity fusion method. 
**“The contributions in this paper are insightful… I have major concerns on the writing and novelty of the paper”** Thank you for the valuable comments. We appreciate the reviewer for seeing the actual contribution of this work, particularly its novelty and insight into the classic multi-fidelity fusion problem. Multi-fidelity fusion has been an important topic in the surrogate modeling community with many real-world applications. This work contains significant novelty in proposing the first tractable infinite-fidelity fusion (one benefit being that it is orders of magnitude faster than the only existing, intractable, infinite-fidelity fusion method). We believe this is a significant contribution to the community, and we will revise the manuscript to highlight the novelty of the proposed method. Due to the very close connection between this work and GAR, the presentation did show a certain level of overlap with GAR. We will revise the manuscript to make this very clear in the introduction and methodology sections, and we will include discussions on the detailed connections. We would like to humbly bring the reviewer’s attention to the work itself and its contribution. We believe this work is practical and useful for many research directions (such as uncertainty quantification and Bayesian optimization), and we will open-source the code to benefit the community. We will do our best to improve the writing and presentation. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal. Comment: I would like to thank the authors for carefully replying to all reviews. Although the authors set out several reasonable action points for improving the quality of the paper, I believe the required revisions are substantial enough to require an additional round of reviewing. Consequently, my vote still tends towards rejection as I believe this work would benefit from resubmission first. 
--- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's valuable time and effort in evaluating our work. However, we are now more confused by the reviewer's judgment after the rebuttal. The required revision is mainly to improve the writing and make it more accessible to the reader. We have also supplied some experiments, but those experiments are not crucial to this work, as they are simple two-fidelity problems whereas this work focuses on many-fidelity problems. The additional experiments do not alter the conclusions or the novelty of this work at all. We are confused as to how such a revision can be substantial enough to require an additional round of reviewing. We understand that the writing of this work can be improved for accessibility. However, as the reviewer also agreed, the contributions in this paper are insightful, and they are not altered by the revision. We would like to humbly urge the reviewer to reconsider the decision. Thank you again for your time and effort. We really do appreciate it, and we would like to kindly ask for your support.
Summary: This paper presents a Gaussian process (GP) based multi-fidelity model that makes use of fidelity indicators. It extends the two-fidelity autoregression formulation to a linear fidelity differential equation. By assuming the lowest-fidelity function and all the residual functions follow GPs, a joint GP model over all the fidelities can be derived. To allow flexible kernel choices, the integral in the kernel function is approximated with Monte Carlo samples. In the case of multi-dimensional observations, the proposed model assumes a coregionalization formulation. To further speed up inference, the proposed model relies on the assumption that the inputs of high fidelity are a subset of the inputs of low fidelity. The proposed method is compared to state-of-the-art multi-fidelity methods on both synthetic and real data sets and shows significant improvement in mean prediction accuracy. Strengths: - This paper extends the common autoregression multi-fidelity formulation to a linear fidelity differential equation, which results in a joint GP model over the observations of all the fidelities. - The proposed method makes an explicit assumption about the role of the fidelity indicator in the model, which allows it to use this information for modeling. - With a sophisticated GP model, the proposed method requires less training time and works better with little data compared to the neural network based multi-fidelity method. - The proposed method significantly outperforms state-of-the-art multi-fidelity methods on both synthetic and real data. Weaknesses: - The modeling assumption in the linear fidelity differential equation formulation is quite restrictive and may not be applicable to many real-world problems. For example, this model is not very effective if the low-fidelity data is only good in certain areas, i.e., the knowledge-transferring factor needs to depend on the input x. - For simplicity, the proposed method assumes that $\beta(t)$ is a constant. 
This means that the knowledge-transferring factor is fully determined by the fidelity indicator, which may be too restrictive for use cases where the fidelity indicator only shows the order of the fidelities, not their relative quality. - The proposed method jointly models the observed data of all the fidelities under a single GP model, which does not scale well when a lot of data are available. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In the experiments, the authors attribute the performance improvement to the usage of the fidelity indicator. However, compared to AR, two factors potentially contribute to the performance difference: the joint modeling of all the fidelity data (instead of pairwise modeling of two consecutive fidelities) and the explicit usage of the fidelity indicator in the linear fidelity differential equation. I wonder which factor is more important to the performance difference. - A big benefit of using GPs is uncertainty quantification. In the experiments, only the accuracy of the mean prediction is compared. I wonder what the performance of the proposed method is in terms of test log-likelihood compared to methods like AR. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations of the proposed method have not been sufficiently discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **"which factor is more important to the performance difference."** Thank you for your insightful comment! We did some investigation into this issue, and we believe that the main contributing factor is the joint modeling of all the fidelity data. We found this by giving a 1st- and 2nd-order polynomial form to the $\beta(t)$ function, which can be understood as a simple transformation of the fidelity indicator t so that it can take more distinct values. We did not observe a significant improvement in performance in most experiments. One possible reason is that the fidelity indicator t is already a good representation of the fidelity information. We will investigate this in future work; we believe such an investigation will lead to a new understanding of fidelity information and infinite-fidelity fusion, and hence to a more effective model. **"test log-likelihood compared to the methods like AR"** Thank you for your insightful comment! We agree that, as a probabilistic model, the log-likelihood is an important metric for evaluating performance. We have included additional results in Fig. 2 of the rebuttal PDF, which shows the negative log-likelihood (without the constant term, so the result can be negative) of our method, ResGP, and AR. Surprisingly, the NLL of our method improves over ResGP and AR even more significantly than the RMSE comparison does. We believe this is because the log-likelihood is more sensitive to the uncertainty of the prediction. Since our method performs joint learning (as discussed in the previous question), it is able to capture the uncertainty as a joint model, whereas the other methods treat each fidelity separately. We will add this new finding to the supplementary materials along with some discussion. **"this model is not very effective if low fidelity data is only good at certain areas..."** Thank you for your valuable insight! 
We agree that the proposed method is, to some extent, limited in model capacity as a tradeoff for tractability. Most SOTA methods choose to sacrifice tractability by using a nonlinear mapping from low fidelity to high fidelity. We are investigating the possibility of using more flexible non-linear mappings while maintaining tractability. One particular direction we are working on is to derive an explicit form of nonlinear mapping using equation discovery techniques (such as SINDy). **"$\beta(t)$ is a constant... is restrictive"** Thank you for your insightful comment! As mentioned earlier, we have tested a polynomial $\beta(t)$ in an attempt to improve the performance. However, the improvements were not significant. We believe this is because the fidelity indicator t is already a good representation of the fidelity information. We will investigate this in future work. **"does not scale well when a lot of data are available."** Indeed, the proposed method suffers from a scalability issue when the amount of data is large. However, there are already good solutions to this issue. For example, we can use inducing-point-based sparse GPs to reduce the computational complexity, or use tensor algebra to reduce the computation provided that the data has a certain structure. We will add this discussion to the revision. Thank you for your insightful comment!
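For intuition, the constant-$\beta$ AR decomposition this thread keeps referring to, $f(\mathbf{x},T)=\beta f(\mathbf{x},T')+u(\mathbf{x},T)$, can be illustrated with a toy numerical sketch. The grid, the low-fidelity function, the residual, and the `beta` value below are made-up assumptions for illustration, not the paper's actual model:

```python
# Toy one-dimensional input grid (system inputs x, not mesh points).
xs = [i / 10 for i in range(11)]
f_lo = [x * (1 - x) for x in xs]      # hypothetical low-fidelity simulator f(x, T')
u = [0.3 * x ** 2 for x in xs]        # hypothetical high-fidelity residual u(x, T)

beta = 0.9  # constant knowledge-transfer factor (the constant-beta(t) assumption)

# AR decomposition: f(x, T) = beta * f(x, T') + u(x, T)
f_hi = [beta * lo + res for lo, res in zip(f_lo, u)]

# A surrogate that already knows f_lo only needs to learn the cheap residual:
recovered = [hi - beta * lo for hi, lo in zip(f_hi, f_lo)]
assert all(abs(r - res) < 1e-12 for r, res in zip(recovered, u))
```

This also makes the reviewer's point concrete: with a constant `beta`, the strength of the low-to-high transfer cannot vary with the input x.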
Summary: The paper proposes an implementable and concrete version of Li et al.'s infinite-dimensional fidelity DE, a method of fusing simulations at different levels of fidelity/resolution, to trade off between computational tractability and statistical accuracy. Strengths: If I understand correctly, the "infinite-fidelity" model of Li et al. has many attractive data fusion properties for variable-resolution simulation, but is not implementable. The claim of this paper is that this specific parameterisation of the infinite-fidelity approach can be implemented with the GP induced by a linear ODE with a GP prior over functional inputs (to the ODE), which induces a GP posterior. Some fancy work with inducing points is done to make this tractable in practice. Weaknesses: Many small typos and some odd phrasing undermine my confidence in the results; see questions. The paper seems not to be about design of experiments but rather about sharing uncertainty between low- and high-fidelity simulations that have already been performed, and yet it lacks a justification for the informativeness of the low-fidelity simulations. I suspect this paper could be great with some typo- and bug-fixes, but in the current form I hesitate to recommend it with confidence. There are too many confusing things to be sure I have understood the paper correctly. I think a simple diagram or two could have made this much clearer. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Equation (1) is central in the paper, and yet I don't understand it. Perhaps I am misreading, or perhaps it is a typo? In what sense is it _auto_-regressive? It seems to be a linear relation between two fidelities, $t=0, t=T$. Do you want it to relate two variable fidelities, not simply the two fidelities $i=0$ and $i=T$? Things look more autoregressive in eq (4), but I'm not sure I understand that either. What is $\Delta$ doing here? 
Maybe this is a notational quibble, but it is multiplying the difference between $t_T$ and $t_0$. If we let $\Delta(t_T-t_0)\to 0$, does that mean that $t_T\to t_0$ *and* $\Delta\to 0$ simultaneously? I think it makes more sense if we delete $\Delta$ and let $t_T\to t_0$, especially in light of (5). Also, what is the relationship between times and fidelities? Does the system we are looking at have a time dimension, with each simulation run having a different time and spatial discretization? In which case, how do I interpret something like $t_T$ as in eq (4)? Shouldn't each fixed fidelity have a _sequence_ of different timesteps at which the entire simulation is evaluated? In (19), the virtual sites look a lot like the inducing points of sparse GPs. Is that how I should be interpreting them? l70: "the system inputs of higher-fidelity are chosen to be the subset of the lower-fidelity, i.e., $\mathbf{X}^T \subset \cdots \subset \mathbf{X}^2 \subset \mathbf{X}^1$." OK, I think I must be confused; what exactly is being subset here? If the higher-fidelity model is sampled over a denser mesh than the lower one, for example, then its data points should be, if anything, a superset of the mesh points of the lower-fidelity model. So I guess it is not mesh points; what is in a subset relation with what, then? l177: I got lost trying to understand the subset selection here. Can you diagram it? I suspect this is very simple, but I just can't parse the sentence "setting, such a requirement is not practical. Here, we derive a decomposition by introducing virtual observations $\hat{\mathbf{Y}}$ for each fidelity such that $\mathbf{Y}^{(T)}$ satisfies the subset requirement for the completed set $\left\{\mathbf{Y}^{(T-1)}, \hat{\mathbf{Y}}^{(T-1)}\right\} . 
\check{\mathbf{Y}}^{(T)}$ is the part of $\mathbf{Y}^{(T)}$ that forms the subset of $\mathbf{Y}^{(T-1)}$ (with a selection formulation $\check{\mathbf{X}}^{(T)}=\mathbf{E}^{(T)} \mathbf{X}^{(T-1)}$, where $\mathbf{X}^{(T-1)}$ corresponds to the previous-fidelity outputs $\left.\mathbf{Y}^{(T-1)}\right)$." Generally, are we missing something from the framing? What even is the fusion problem, if we already have a fixed library of simulations? If I have run a high-fidelity simulation already, then I have a maximally dense mesh of points for some version (with maximal $T$); then what do I gain by fusing it with lower-fidelity runs as well? If the simulation is deterministic, which it seems to be, then we might as well just take that and go home. I thought that the fusion setting made sense in an _adaptive_ design-of-experiments setting where we might start from a low-fidelity model and up-sample as necessary to reduce overall uncertainty until we are satisfied. The model in this paper seems to assume a fixed set of simulation runs and then pool them; but why would we do this, rather than simply throw out all but the highest-fidelity model? Is there some quantification of uncertainty we get from the lo-fi models which is not apparent at the high fidelity? Bonus question about related research: The setting of the LiFiDEs, as one of the "tractable fusion" methods (l222), looks a lot like the "probabilistic numerics" setting; see e.g. https://www.probabilistic-numerics.org/research/pde/ where there are multi-resolution and meshless methods. Can you position this work in relation to that literature? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors are transparent about the limitations of the method. 
Could probably test in settings where the model likelihoods are poorly approximated by Gaussians. Possibly some of the example problems do this; I have not had time to check the appendices to confirm this, however. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **“What even is the fusion problem”** We appreciate the reviewer for providing such valuable feedback. Please allow us to clarify the fusion problem here. The goal of multi-fidelity fusion is to accurately predict the output of $f(\mathbf{x},T)$, where $f$ is the simulator, $\mathbf{x}$ is the system input, and $T$ is the highest fidelity indicator. Note that $\mathbf{x}$ is the system input (such as the attack angle for an airfoil), NOT the space locations; $T$ is the fidelity indicator and has nothing to do with time. Traditional methods approximate $f(\mathbf{x},T)$ using many simulations for different $\mathbf{x}$ at fidelity $T$, which is computationally expensive. Fidelity fusion methods, for instance AR, decompose $f(\mathbf{x},T)=\beta f(\mathbf{x},T')+u(\mathbf{x},T)$ such that $f(\mathbf{x},T)$ can be approximated using many simulations at low fidelity $T'$ (to approximate $\beta f(\mathbf{x},T')$) and a few simulations at high fidelity $T$ (to approximate $u(\mathbf{x},T)$). **“equation (1) is central…”** Thank you for your question. We double-checked to confirm that Eq. (1) is correct; it is equivalent to Eq. (4). It is called autoregression because the highest-fidelity (indicated by $T$) prediction relies on the prediction of the low-fidelity (indicated by 0) solution plus some residual. It is indeed a linear relationship. **“What is Δ doing here?”** Thank you for your insight! Our initial thought was to use $\Delta(t_T-t_0)$ to denote the difference between the solutions at the two fidelities $t_T$ and $t_0$ as a function of the fidelity difference $(t_T-t_0)$. We now realize that $\Delta$ is redundant here and will remove it in the revision. Thank you again! **“relationship between times and fidelities?”** We apologize for the confusion. We denote the fidelity using the factor $t$, which has nothing to do with time. We do not consider time as a variable in this work. 
Instead, the values at particular time stamps and space locations are recorded to form the output (QoI) vector $y$, which contains the key information of the entire simulation evolution. This is a common workaround for learning a spatial-temporal field from complex simulations [1]. [1] S. Conti and A. O’Hagan, “Bayesian emulation of complex multi-output and dynamic computer models,” Journal of Statistical Planning and Inference, vol. 140, no. 3, pp. 640–651, Mar. 2010 **“(19) look a lot like the inducing points of sparse GPs.”** Yes, thank you for your insight! It is very similar to sparse GPs, but there are also some differences. The difference: the inducing points of sparse GPs are introduced to reduce the size of the kernel matrix, whereas the inducing points in our work are introduced to fulfill the subset requirement. The similarities: if the inducing points of sparse GPs are assumed Gaussian, they can be integrated out, as shown in [2]; in our work, the inducing points naturally admit a Gaussian distribution because they are predictions of the low-fidelity GP. The challenge then becomes how to integrate them out and also decompose the large kernel matrix using the subset structure. We will add this discussion to our revision. Thanks again for your comments. [2] M. Titsias, “Variational Learning of Inducing Variables in Sparse Gaussian Processes,” AISTATS, PMLR, Apr. 2009, pp. 567–574. **“the system inputs of higher-fidelity are chosen to be the subset of the lower-fidelity”** As we tried to clarify at the beginning, $x$ denotes the system inputs rather than the spatial locations. Thus, the subset setting means that the system inputs for conducting high-fidelity simulations are a subset of those for low-fidelity simulations. This is intuitive, as one can normally run many low-fidelity simulations and fewer high-fidelity simulations that are crucial for some goals, such as optimization. Also, note that we do not consider the mesh/resolution difference between different fidelities. 
This is achieved by recording values at some pre-defined spatial-temporal locations based on interpolations of the simulation results. **“I got lost trying to understand the subset selection”** We apologize for the confusion. We will add a diagram of the subset and non-subset structure to the supplementary materials; also see Fig. 3 in the rebuttal PDF. The idea is intuitive: let $\mathbf{X}^{(t)}$ be the available system inputs at fidelity $t$. The inputs at fidelity $t$ consist of two parts: the subset part $\check{\mathbf{X}}^{(t)}$ contained in $\mathbf{X}^{(t-1)}$, and the part not contained in $\mathbf{X}^{(t-1)}$, denoted $\hat{\mathbf{X}}^{(t-1)}$ (where the hat and superscript indicate that it is a complement set for the $t-1$ fidelity). To extract these two parts, we define $\check{\mathbf{X}}^{(t)}= \mathbf{E}^{(t)} \mathbf{X}^{(t-1)}$ and $\hat{\mathbf{X}}^{(t-1)}= \hat{\mathbf{E}}^{(t)} \mathbf{X}^{(t)}$. **"Bonus question about related research..."** We are impressed by the reviewer’s knowledge. Probabilistic numerics and multi-fidelity fusion (as special types of surrogate models) both involve probabilistic methods to approximate or replace deterministic computations. However, the key difference is that probabilistic numerics views numerical problems as statistical inference problems and aims to provide uncertainty estimates along with the solution, whereas surrogate models provide a computationally efficient approximation to the expensive simulation. The setting of LiFiDEs looks like probabilistic numerics but differs in that here the differential operator is an assumed model, whereas in probabilistic numerics the differential operator is specified by the target PDE. Although probabilistic numerics is meshless, it still relies on solving a modified PDE at collocation points, which equivalently defines the fidelity factor $t$ in our work. 
We will add such discussions to the revision with related references on probabilistic numerics. --- Rebuttal 2: Comment: Dear Reviewer KCHb, Could we kindly ask whether our responses have addressed your concerns, or whether further explanations or clarifications are needed? Your time and effort in evaluating our work are greatly appreciated! We would like to do our best to present our work clearly and make solid contributions to the AI community. Kind regards
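The subset extraction via selection matrices $\mathbf{E}^{(t)}$ described in the rebuttal above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: the helper name and the index-based 0/1 construction are assumptions.

```python
import numpy as np

def selection_matrix(idx, n):
    """Hypothetical helper: a 0/1 matrix E such that E @ X keeps the rows of X listed in idx."""
    E = np.zeros((len(idx), n))
    E[np.arange(len(idx)), idx] = 1.0
    return E

X_lo = np.arange(15.0).reshape(5, 3)   # 5 low-fidelity system inputs in R^3
subset_idx = [0, 2, 4]                 # high-fidelity runs reuse these inputs
E = selection_matrix(subset_idx, n=5)
X_hi_subset = E @ X_lo                 # the subset part of the fidelity-t inputs
assert np.allclose(X_hi_subset, X_lo[subset_idx])
```

The complement matrix $\hat{\mathbf{E}}^{(t)}$ would be built the same way from the remaining row indices.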
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort of the reviewers. The valuable comments will be absorbed into our revision. Here we supply some additional graphical information to address the reviewers' concerns. The first part presents additional experiment results: Fig. 1 shows the classic subset experiment setting with η=0.5 while reducing the number of fidelities from five to two, and Fig. 2 shows the test negative log-likelihood (without the constant term, so the result can be negative) of our method, ResGP, and AR in the five-fidelity subset experiment setting with η=0.5. The second part (Fig. 3) illustrates the notation system for the subset data structure. Pdf: /pdf/46b3fa4bf1206a609bec38751bc2710d1e7c81ed.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
BayesDAG: Gradient-Based Posterior Inference for Causal Discovery
Accept (poster)
Summary: BayesDAG proposes a hybrid of SG-MCMC sampling and variational inference for drawing samples from the posterior distribution of DAGs in the context of Bayesian structure learning. This work is closely related to the recently published "Yu et al., DAGs with no curl, 2021" [NoCurl], where the space of DAGs is converted to the space of a skew-symmetric matrix, "W", and a potential vector, "p". The main difference is that [NoCurl] focuses on an optimization setting (returning a most probable DAG), while the focus of the current paper is on Bayesian inference (via sampling). They also mention that (for reasons that are not entirely clear to me) optimization in the NoCurl approach is challenging (due to uninformative gradients), and that this is because the entries of the skew-symmetric matrix W are continuous. To address this problem, they replace the continuous matrix W with a binary matrix. For this purpose, they slightly modify the theory presented in NoCurl (namely, replacing ReLU with a step function). They then reformulate the problem as a matrix permutation setting and finally propose an approximate solution for the latter formulation via the Sinkhorn approach to learning latent permutations with the Gumbel trick. Strengths: 1. This is a well-written paper addressing an important problem, i.e. Bayesian structure learning. 2. The proposed approach is an interesting (albeit sophisticated) combination and modification of several algorithms and recent advances in the field. 3. Even though I did not follow why the continuous matrix W had to be replaced by a binary matrix in the first place, I found the way they did it and managed to approximate its gradient very interesting. Weaknesses: 1. As I mentioned in the summary section, a key contribution of this paper is the insight that it is better to replace the continuous matrix W (of the NoCurl algorithm) with a binary matrix (see lines 101-104). 
But their justification for this claim does not seem convincing to me and should be explained better. To be more concrete: (a) Why should replacing a continuous matrix with a discrete matrix be helpful when we are relying on gradient information for optimization and sampling? (b) In line 100 they mention a "reported failure" of NoCurl. It would be great if the authors would provide a reference to where this failure is reported (or, if it is reported in the original NoCurl paper, the relevant section). 2. Given the close link of the present paper with the NoCurl approach, it would be great if the authors would compare their algorithm (with binary W) against an alternative approach where, just like NoCurl, a continuous W is used (with ReLU instead of the step function, etc.). 3. I see references to repositories that the authors have used but no link to their own code. Given that implementing the proposed algorithm from scratch is by no means trivial, I encourage the authors to provide the code, both to facilitate other researchers' use of their algorithm and to allow the reviewers to check the reproducibility of the reported results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Given that the Sinkhorn approach is approximate, how do you guarantee the DAGness of your graphs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback on our paper. We address your concerns in the following: 1. **Advantages compared to NoCurl**: Sorry about the ambiguity; we will make this clearer in the revised paper. There are several reasons why a binary matrix is better than a continuous one: (i) Note that $ReLU(grad \pmb{p})$ gives a fully connected DAG. The main purpose of the W matrix is therefore to disable edges. A continuous W requires a threshold to properly disable edges, since it is hard for a continuous matrix to learn exactly 0 during optimization; (ii) $ReLU(grad \pmb{p})$ already contains continuous values, and the $W$ matrix also contains continuous values. Thus, the learning of the edge weights and of the DAG structure is not explicitly separated, resulting in a complicated non-convex optimization (see the discussion below Eq. 3 in [1]). With binary matrices (i.e. replacing ReLU with the $Step$ function and using a binary $W$), we focus only on learning the graph structure, which significantly reduces the optimization complexity. 2. **Reported failure of NoCurl**: In the original NoCurl work [1], they reported that directly optimizing $W\cdot ReLU(grad \pmb{p})$ results in poor graph discovery performance (refer to "rand init" and "rand p" in Table 1 of [1]). That is exactly why they propose to use NoTears as the initialisation and design a complicated projection scheme to project the initialisation onto a DAG space. In contrast, our approach does not require any projections, and sampling with our modified objective directly leads to good graph discovery performance. 3. **Comparison with an alternative approach**: To the best of our knowledge, ours is the only work that shares some similarities with NoCurl. It is important to note that our method is focused on a Bayesian causal discovery framework, while NoCurl is not explicitly designed for this purpose. Thus, NoCurl might not serve as the most appropriate baseline for our study. 
We further observed that NoCurl demonstrated very poor performance when directly optimized, as shown in Table 1 of [1]. This observation led us to reasonably assume that our method would outperform it. NoCurl with the projection steps was found to have performance similar to NoTears, as illustrated in Figure 1 of [1], but it is hard to adapt the projection step for Bayesian causal discovery. DiBS [2], a Bayesian causal discovery algorithm inspired by NoTears, serves as a better comparison for our work. In most cases, our method exhibits superior performance compared to DiBS, further solidifying the effectiveness of our proposed approach. 4. **Access to code**: We have provided the AC with a link to the code and will open-source it on acceptance. 5. **DAGness with approximate gradient**: Although Sinkhorn is an approximate method and can only output a bi-stochastic matrix, we use the Hungarian matching algorithm to ensure that a valid permutation can always be obtained from this bi-stochastic matrix (see line 167). With a valid permutation matrix, a DAG structure is guaranteed. We would once again like to express our gratitude to the reviewers for their valuable feedback and hope that these clarifications have effectively addressed your concerns. [1] Yu, Yue, et al. "DAGs with no curl: An efficient DAG structure learning approach." International Conference on Machine Learning. PMLR, 2021. [2] Lorch, Lars, et al. "DiBS: Differentiable Bayesian structure learning." Advances in Neural Information Processing Systems 34 (2021): 24111-24123. --- Rebuttal Comment 1.1: Title: Reviewer R7H9: Continuous vs binary W and other issues Comment: Dear Reviewer R7H9, Have the authors addressed the issues raised in your review, and does this change your assessment of the paper? --- Reply to Comment 1.1.1: Title: Further Questions Comment: Dear Reviewer, Thanks for your review. 
Note that we have given the AC access to the code (which will be open-sourced on acceptance) and have also addressed your other concerns. If there are further questions, we are happy to answer them as well. If there are no outstanding questions and we have addressed all your concerns, then given that your comments overall seem positive regarding our work, we would appreciate it if you could consider increasing your score.
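The DAGness guarantee discussed in point 5 of the rebuttal above (round the Sinkhorn output to a hard permutation, then build the graph in the induced order) can be sketched with numpy and scipy. This is a toy illustration under simplified assumptions: the bi-stochastic matrix is stood in by a few normalisation sweeps on a random matrix rather than a learned Sinkhorn operator, and the edge template is a fixed fully connected DAG.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d = 5

# Stand-in for a Sinkhorn output: a (nearly) bi-stochastic matrix.
S = rng.random((d, d))
for _ in range(50):                       # alternating row/column normalisation sweeps
    S /= S.sum(axis=1, keepdims=True)
    S /= S.sum(axis=0, keepdims=True)

# Hungarian matching rounds the bi-stochastic matrix to a hard permutation.
rows, cols = linear_sum_assignment(-S)    # maximise the assignment score
P = np.zeros((d, d)); P[rows, cols] = 1.0

# Any permutation induces a total order; conjugating a strictly
# upper-triangular template with it yields an adjacency that is a DAG.
U = np.triu(np.ones((d, d)), k=1)         # fully connected DAG in canonical order
A = P.T @ U @ P

# Acyclicity check: traces of matrix powers count closed walks.
M, acyclic = np.eye(d), True
for _ in range(d):
    M = M @ A
    if np.trace(M) != 0:
        acyclic = False
assert acyclic
```

In practice one would mask `U` (or its conjugated version) with a learned binary edge matrix before use; masking a DAG can only remove edges, so acyclicity is preserved.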
Summary: The authors propose a method for posterior inference of DAG structure *and* function parameters, with potential applicability to arbitrary functional relations between nodes. The authors modify a novel characterization of DAGs and interpret this characterization in terms of a sorting operation, which can be relaxed to allow differentiability. The authors define priors on DAGs (in the alternative space) and function parameters, and characterize the likelihood based on a specific model choice. They use the resulting joint distribution to iteratively sample some parameters and conduct variational inference regarding others. The authors examine the performance of their proposed methodology on various synthetic and real datasets. Strengths: - The paper is very well written. It presents the previous work, the motivation for the current research, and the reasoning behind methodological choices very clearly. - The paper utilizes recent previous research intelligently and presents concrete innovations to solve well-defined problems. - Posterior inference in the DAG structure and parameter space without some of the limitations of previous work is valuable and is likely to inspire future work. Weaknesses: - DAG model selection results have causal implications given specific model assumptions regarding the generative model of the data. ANM is such a model assumption. However, it is unclear whether the identifiability results still apply in this case, given the priors defined on the DAG structure and function parameters. I think the authors' work would still be valuable as only a DAG inference method; however, since the authors present their proposal as a causal discovery + inference method, this point needs further discussion. - I think the authors' presentation should be modified to make sure their inference method is more clearly understood. 
Their initial presentation, including "posterior sampling" in the title and frequent references to Gibbs sampling throughout the text, leads the reader to think that the authors will present results with a correct MCMC algorithm and produce a full posterior distribution. However, the most promising results presented by the authors come from their iterative algorithm, which samples from the posterior of some parameters and uses variational inference for others. This is fine as a methodological choice, but their presentation leads the reader to have higher expectations, which can become crucial depending on the use case of the reader. - The causal sufficiency assumption prevents using the current method in problems where unobserved confounding is likely. In my opinion, this is acceptable given the difficulty of the problem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Are there any potential difficulties with using the authors' method with other SCM model assumptions / likelihoods? - What are the grounds for baseline selection in the experiments? I think this would be an important addition to the final text. - In 6.4, was there model misspecification in the other methods as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors adequately address the limitations of their work overall; however, see the Weaknesses section above for some important caveats. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback on our paper. We address your concerns in the following: 1. **Identifiability**: We would like to clarify that identifiability is a property of the SCM (of its specific parametric form), not of an estimation/inference method, which is what BayesDAG is about. With a small amount of data, the prior can affect the search procedure, but it will not affect identifiability, since our priors over graphs are non-zero for any DAG. Namely, for each sampled graph and set of function parameters, the resulting ANM is identifiable, apart from some exceptions (e.g. linear Gaussian, etc.). With enough data (e.g. infinite data), the prior becomes negligible and will also not affect the search procedure (see Theorem 1 in [1]). Since our method can also perform the causal inference task, our model is also identifiable for causal inference: for each DAG and function parameter pair we have a corresponding SEM, from which the causal inference quantity can be obtained by manipulating the SEM. 2. **Clarity of the inference method**: Sorry about the ambiguity. We want to emphasise that although the fully SG-MCMC approach does not give promising empirical results compared to SG-MCMC+VI, proposing such a framework is itself a contribution, and its inferior performance requires further investigation. We will make the presentation of the inference method clearer in the revised paper. 3. **Causal sufficiency**: We agree that causal sufficiency is a strong assumption in practice, but within the scope of this paper we only consider systems without latent confounders. This is also a common assumption adopted by all our baselines and most previous work. Future work is needed to relax this constraint. 4. **Potential difficulty with other SCMs**: It depends on which SCM is chosen. The core assumption required by our method is that the SCM should be structurally identifiable. 
For example, the post-nonlinear model, which is identifiable, can be used to replace the ANM in our case. 5. **Baseline selection**: The selection criteria for the baselines are the following: (1) it should be a Bayesian causal discovery method; (2) the baselines should cover both linear (BCD Nets) and non-linear models (DiBS); (3) the baselines should include a quasi-Bayesian method adapted from a traditional causal discovery approach (BGES). These principles ensure a comprehensive set of baselines to demonstrate the effectiveness of our method. 6. **Model misspecification in 6.4**: Yes, our method, along with the other baselines, is subject to model misspecification, since the ground truth mechanism may not have an additive noise structure. We would once again like to express our gratitude to the reviewers for their valuable feedback and hope that these clarifications have effectively addressed your concerns. [1] Geffner, Tomas, et al. "Deep end-to-end causal inference." arXiv preprint arXiv:2202.02195 (2022). --- Rebuttal Comment 1.1: Title: Reviewer zwvL: Have the authors addressed your concerns? Comment: Dear Reviewer zwvL, Have the authors addressed the potential weaknesses raised in your review and your questions? Does their rebuttal change your assessment of the paper?
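For readers unfamiliar with the additive noise model assumption discussed in this exchange, here is a minimal two-node sketch. The mechanism `tanh(2x)` and all constants are hypothetical choices for illustration only, not the paper's data generator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Additive noise model x -> y: the effect is a (nonlinear) function of the
# cause plus noise that is independent of the cause.
x = rng.normal(size=n)
noise = 0.1 * rng.normal(size=n)
y = np.tanh(2.0 * x) + noise

# In the causal direction the residual recovers the independent noise term.
residual = y - np.tanh(2.0 * x)
assert abs(np.corrcoef(x, residual)[0, 1]) < 0.1
```

Identifiability of such models (for nonlinear mechanisms, with the linear-Gaussian case as a known exception) is what licenses reading the inferred DAG causally.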
Summary: The paper proposes a Bayesian causal discovery method based on a novel parametrization of the binary DAG space and SG-MCMC. The proposed method neither relies on DAG regularization nor is restricted to linear models, overcoming the limitations of prior approaches. Experimental results demonstrate the competitive performance of the proposed method compared to existing approaches. Strengths: The paper is well-written and easy to understand. It is easy to follow the core idea of the proposed method. The proposed method is sound and well-motivated (i.e., there are apparent limitations of previous work that this method overcomes). Several techniques are employed smoothly to construct the method. Experimental results demonstrate the effectiveness and scalability of the proposed method. Weaknesses: For the empirical evaluation, a comparison with the MCMC approach [1] is missing. Also, AUROC is not reported, which is widely used for the evaluation of uncertainty quantification in the Bayesian causal discovery literature. The proposed method is claimed to be scalable and its computational complexity is analyzed, but the actual computational cost (e.g., wall-clock time) is not compared with DiBS. The computational resources used are also not provided (e.g., CPU, GPU). Similarly, "per node degree 2" seems very limiting, as it results in a very sparse graph for large d. Large d will certainly affect the performance of likelihood computation and posterior sampling of W, where more edges may exhibit dependencies. [1] Improving Markov chain Monte Carlo model search for data mining, 2003 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Details of the weight-sharing mechanism (line 186) are missing. While the size of the networks is very small, how does it impact the performance of the proposed method, other than reducing the total number of network parameters? Typo: (Line 230) focusonly Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some of the assumptions can be viewed as limitations, but I am fine with those (they are crucial to yield the proposed method). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback on our paper. We address the concerns in the following: 1. **Missing MCMC baselines**: We acknowledge the importance of a comprehensive analysis. Note that all MCMC methods are defined only for linear models, usually where the parameters can be marginalized out. They are also not scalable, limiting the settings in which we can compare against them. There has been extensive research showing their lack of scalability and their slow mixing and convergence (see related work). In addition, it has been shown that DiBS [1] outperforms all MCMC methods (refer to Figures 2, 3, 4 and Table 1 in [1]). The superior performance of our method, as compared to DiBS, led us to believe that MCMC as a baseline might not be the best choice. However, given your comment, we will include a comparison to an MCMC baseline for the 5-variable setting in the camera-ready version, even though we believe adding it will not change our conclusion. 2. **Missing AUROC metric**: Our primary goal was to select a diverse set of metrics that provide a comprehensive assessment of performance, capturing different aspects of the model. While AUROC is a metric for evaluating graph quality, we believe that its utility in our case is somewhat limited, as it shares similarities with the SHD and F1 score. These latter metrics, which are incorporated into our evaluation, also consider the inferred graph's quality, arguably making the inclusion of AUROC somewhat redundant for our purposes. Apart from the quality of the graph posterior, the posterior over function parameters is also crucial for Bayesian causal discovery. This aspect is not captured by the AUROC metric. To account for this crucial facet, we opted for the held-out likelihood as an evaluation metric. This choice is grounded in its widespread use in the literature as a reliable indicator of Bayesian inference quality [2,3]. Meanwhile, we can include the AUROC in the camera-ready version. 3. 
**Computational efficiency**: We have included the wall-clock comparison and generalization to $d>100$ datasets in the PDF. It is important to emphasize that our approach is capable of handling significantly larger dataset dimensions, scaling up to $d>100$ cases, and achieves faster convergence compared to BCD and DiBS in terms of wall-clock time under all dimensionalities. On the other hand, DiBS is limited to $d\leq50$ with a single GPU, and BCD is limited to $d<150$. Moreover, our paper includes an in-depth computational complexity analysis (refer to lines 232-241). This analysis further strengthens the claim that our approach exhibits a competitive edge over DiBS. 4. **Per node degree of 2**: Our choice is motivated by its prevalence in the literature, particularly when it comes to synthetic ER and SF datasets [4]. Notably, DiBS [1] also employed this choice in their experiments. By adopting it, we ensure that our experiments are consistent with established benchmarks, thereby facilitating a more meaningful comparison between our method and the baselines. 5. **Weight-sharing mechanism and size of networks**: Sorry for the ambiguity. In the formulation of Eq. 10, each node requires two separate neural networks. Thus, in total, $2d$ different networks would need to be trained, incurring a high computational cost. To avoid this, we instead have a separate trainable embedding $\mathbf{u}_i$ for each node. Therefore, we only need two neural networks, with the trainable embedding differentiating the nodes. This is equivalent to sharing the weights across the $2d$ neural networks; [5] also adopted the same sharing mechanism. We will add a clearer explanation in the revised paper. The reason we chose this network size is that it is already a common choice in the literature [1,5]. In fact, DiBS [1] used even smaller network sizes (2 hidden layers with 5 units). 
Our method can be easily extended to larger network sizes, and this shouldn't impact the performance much, since SG-MCMC is designed to accommodate large network and dataset sizes [6]. We would once again like to express our gratitude to the reviewers for their valuable feedback and hope that these clarifications have effectively addressed your concerns. [1] Lorch, Lars, et al. "DiBS: Differentiable Bayesian structure learning." Advances in Neural Information Processing Systems 34 (2021): 24111-24123. [2] Gong, Wenbo, Yingzhen Li, and José Miguel Hernández-Lobato. "Meta-learning for stochastic gradient MCMC." arXiv preprint arXiv:1806.04522 (2018). [3] Lorch, Lars, et al. "Amortized inference for causal structure learning." Advances in Neural Information Processing Systems 35 (2022): 13104-13118. [4] Zheng, Xun, et al. "DAGs with no tears: Continuous optimization for structure learning." Advances in Neural Information Processing Systems 31 (2018). [5] Geffner, Tomas, et al. "Deep end-to-end causal inference." arXiv preprint arXiv:2202.02195 (2022). [6] Chen, Changyou, et al. "Bridging the gap between stochastic gradient MCMC and stochastic optimization." Artificial Intelligence and Statistics. PMLR, 2016. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors' answers perfectly cleared up almost all of my concerns. Regarding 4, "per node degree of 2", it is understandable since prior work also did the same. However, setting the degree to 3 is not a difficult request, and it would be good to see how performance drops as we increase the degree. Regardless, I would like to keep my score ("accept"). --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response and for confirming your positive score. We are happy to hear that our response cleared up your concerns. 
Regarding experiments with per node degree 3: given the limited time and resources for the rebuttal, we prioritized the scaling experiments (see rebuttal PDF) over per node degree 3, as we believe the experiments and conclusions would be very similar, if not the same, compared to the experiments with per node degree 2. To further address your issue, and in response to R7H9, we have provided the source code, which can easily handle runs with per node degree 3. We will open-source this code on acceptance. Also note that the real-world/semi-synthetic datasets contain graphs that are not necessarily per node degree 2, and our approach performs well on them. We are happy to answer any further questions you may have.
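The weight-sharing mechanism described in the rebuttal above (one shared network plus per-node embeddings $\mathbf{u}_i$ instead of $2d$ separate networks) can be sketched as follows. All shapes, names, and the untrained forward pass are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
d, emb_dim, hidden = 4, 3, 8

# One trainable embedding per node tells the SHARED network which node
# it is computing a mechanism for, replacing d (or 2d) separate MLPs.
U = rng.normal(size=(d, emb_dim))                 # per-node embeddings u_i
W1 = rng.normal(size=(d + emb_dim, hidden))       # shared first layer
W2 = rng.normal(size=(hidden, 1))                 # shared second layer

def f_shared(i, x_parents):
    """Mechanism for node i: concatenate parent values with embedding U[i]."""
    h = np.tanh(np.concatenate([x_parents, U[i]]) @ W1)
    return (h @ W2).item()

x = rng.normal(size=d)
outputs = [f_shared(i, x) for i in range(d)]      # d mechanisms, 2 weight matrices total
assert len(outputs) == d
```

The parameter count is independent of $d$ up to the embeddings, which is what makes the scheme cheap for large graphs.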
Summary: The paper proposes a novel Bayesian causal discovery (BCD) method that infers the posterior distribution $p(G|\mathcal{D})$ by projecting the DAG $G$ into an equivalent search space. Instead of sampling $G$ directly, the method constructs the posterior distribution by sampling a binary matrix $W$ and a potential vector $p$ via MCMC sampling and variational inference. The Bayesian causal discovery method can scale up to 100 variables and achieves better accuracy on large datasets. Strengths: - The idea of employing the projection framework from the DAG-NoCurl paper is interesting. In particular, the difficulty of sampling-based posterior estimation for DAG learning methods lies in the ordering of parent sampling; the potential function p automatically preserves the causal order. - The proposed method is a combination of a sampling-based method and a variational inference method. Compared to state-of-the-art BCD methods that adopt VI, the proposed method achieves better SHD, especially on high-dimensional data. Weaknesses: - (**Major**) The experiments are not comprehensive. There is a trade-off between efficiency and accuracy when comparing sampling-based approaches to VI approaches. Compared to the existing VI-based BCD methods, it is possible that the proposed approaches are more accurate but also suffer from low efficiency. Please refer to the question section for details. - (**Minor**) The tuning of hyperparameters such as the scale of p and theta. Since the original DAG-NoCurl framework is derived for a continuous parameterization, the algorithm requires the tuning of additional hyperparameters, which increases the training difficulty. But I understand this is a minor concern. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions focus on the experiment section: - It seems that BayesDAG achieves better accuracy in cases with $d>=70$. I wonder if the authors can also show the efficiency (as in runtime) of the different methods? 
I suspect that BayesDAG would take longer to converge than the VI-based approaches due to the Gibbs sampling procedure. The strength of BayesDAG would be better demonstrated if it achieved much better accuracy with only a slight compromise on efficiency. - Some synthetic data is generated based on the unidentifiable linear SEM with non-equal variances. How does the identifiability of the employed SEM affect the BayesDAG method (how does the non-equal variances assumption influence Eq. (11))? - The paper only shows empirical results on $d\leq 100$ variables. I am wondering if the authors can show that BayesDAG scales to data with higher dimensions. Gradient-based causal discovery methods such as GraN-DAG can also scale up to 100 variables and finish within a reasonable runtime with the help of a GPU. - It would be better if the authors could compare to traditional scalable causal discovery methods such as FGES. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The idea of the paper seems interesting and the theory is sound. My concern is that the Gibbs sampling procedure in the proposed algorithm may compromise its efficiency and scalability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback on our paper. We address the raised concerns in the following. 1. **Major concern**: We have included a wall-clock time comparison in the supplementary PDF. From the comparison, we can see that our approach, compared to DiBS and BCD, converges faster while obtaining better performance in most cases. DDS, though it converges faster, performs significantly worse than our approach. In addition, it is important to note that BCD Nets is a **linear method**, while our approach is **fully non-linear**. This simplicity in the model assumption may be too restrictive for certain problems, limiting the potential for accurate causal discovery. In contrast, our method not only demonstrates performance advantages for dimensions $d > 70$, but also outperforms competing methods, including BCD Nets, for $d = 30$ and $d = 50$, along with a faster convergence speed. This can be clearly observed in Figure 3 of our paper. These results serve as strong evidence for the benefits of adopting a non-linear model in Bayesian causal discovery. 2. **Minor concern**: In response to the reviewer's concerns about hyper-parameter tuning, we have provided a comprehensive list of the hyper-parameters used in our experiments in Appendix D. To address the challenge of tuning the additional hyperparameters, we have conducted ablation studies, presented in Appendix E.3. These studies examine the sensitivity to the initialized $\pmb{p}$, the number of MCMC chains used, and the sampler noise scale for $\pmb{p}$ and $\pmb{\Theta}$. These ablation studies offer valuable guidelines for tuning the additional hyperparameters. To further assist readers, we will include a concise paragraph in the revised paper explaining how to select these hyperparameters. 3. 
**Identifiability**: It is important to emphasize that identifiability is an inherent property of the model (and not of the inference/estimation method, which is what BayesDAG is about); we use an additive noise model to ensure it. With a small amount of data, the prior term can affect the search procedure, but it will not affect identifiability, since our prior puts non-zero probability on any DAG and function parameters. On the other hand, with enough data (e.g. infinite data), the prior term becomes negligible and will not affect our search procedure either (see Theorem 1 in [1]). In the case of experiment 6.1.1, the ground truth data generative mechanism is not identifiable, only identifiable up to the Markov equivalence class; however, this does not affect our inference procedure in any sense. One advantage of our method is that it can recover multiple graphs that fit the data equally well, which can be used to test the model's uncertainty estimation quality. 4. **Higher dimensional problems**: As requested, we have included the performance in even higher dimensions in the supplementary material (with a 40GB A100 GPU). Ours is the only method among the baselines capable of generalising to $d>100$, apart from BGES, which is a quasi-Bayesian approach. BCD and DiBS are not scalable enough to run at this high dimensionality, which demonstrates that our method is more memory-efficient than most of them. Additionally, we would like to clarify that a direct comparison between our method and GraN-DAG may not be entirely appropriate for the following reasons: (i) GraN-DAG is not a Bayesian causal discovery method; it focuses on inferring a single DAG, whereas our method is designed to infer a distribution over DAGs; (ii) GraN-DAG is based on a continuous relaxation, which may result in graphs that do not strictly adhere to the DAG structure. In contrast, one advantage of our method is that it infers DAGs directly. 5. 
**FGES**: Regarding traditional causal discovery baselines, we have indeed included a comparison with the bootstrapped GES method in our paper; please refer to Figure 2, Figure 3, and Tables 1 and 2 for detailed comparisons. It is important to note that Fast GES is a variant of GES, but it is not a Bayesian causal discovery method and only infers a single graph. We would once again like to express our gratitude to the reviewers for their valuable feedback and hope that these clarifications have effectively addressed your concerns. [1] Geffner, Tomas, et al. "Deep end-to-end causal inference." arXiv preprint arXiv:2202.02195 (2022). --- Rebuttal Comment 1.1: Title: Reviewer hANm: Have the authors addressed your concerns regarding the experiments? Comment: Dear Reviewer hANm, The authors have replied to your concerns regarding the experiments and have also added new results in the PDF accompanying their global reply. Could you please have a look and let us know if this changes your original assessment? --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your review. We have provided a detailed scaling analysis and wall-time results in response to your concerns. If there are any outstanding questions, we are happy to answer them as well. If there are no further questions and our response addressed your concerns, we would appreciate it if you could consider increasing your score. --- Rebuttal Comment 1.2: Comment: Thanks to the authors for the responses. I believe they have addressed my concerns. Overall, I find this paper interesting and would like to keep my score at borderline accept.
Rebuttal 1: Rebuttal: We would like to express our sincere appreciation to all reviewers for their valuable time and effort in providing constructive feedback on our paper. We are delighted that the reviewers generally find our work interesting (hANm, R7H9), sound and well-motivated (3g7c, zwvL), well-written (3g7c, zwvL, R7H9), and valuable and likely to inspire future work (zwvL). We have taken into consideration the common concerns raised, specifically regarding computational efficiency and scalability. In response, we have included a comprehensive comparison of wall-clock times and performance results for dimensions $d > 100$ in the supplementary PDF. In summary, we would like to emphasize that BayesDAG, as a nonlinear Bayesian approach, demonstrates better computational efficiency than existing state-of-the-art Bayesian causal discovery algorithms such as DiBS and BCD Nets. It is also worth noting that algorithms like DiBS and DDS are unable to run for dimensions $d > 70$ in a single-GPU setup, and ours is the only method capable of generalising to $d>100$ (with a single 40GB A100 GPU). These additional experiments effectively demonstrate the computational advantages of our approach over the baselines. In applications like gene regulatory network inference, approaches that can deal with several hundred variables are required, and our contribution takes a positive step in this direction. Just like NoTears, our work, which builds on NoCurl, and our theoretical contributions lay the groundwork for scalable causal discovery approaches in more complex settings, for example with hidden variables. We have also shared our code, as requested by R7H9. Pdf: /pdf/0e099e7759ff0fab795e307707a728c12b686a51.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Maximization of Average Precision for Deep Learning with Adversarial Ranking Robustness
Accept (spotlight)
Summary: This paper studies the average precision issue in adversarial training. Attacking a single image may not affect the final accuracy, yet it can largely decrease the average precision; this phenomenon is demonstrated to be harmful when applying adversarial training. To encourage AP robustness, a novel method is proposed that combines adversarial training and AP maximization. Additionally, different variants are proposed by adding pointwise regularization. Through empirical analysis on many well-known datasets, the authors carefully validate the effectiveness of the proposed methods. Strengths: - This paper is well-written and easily understood. - The effectiveness is strong on many datasets. - A detailed analysis of many variants of AdAP is provided. Weaknesses: - The major concern is the motivation of this paper. Why average precision is important in adversarial training is not sufficiently addressed. The research problem seems ad hoc, such that the proposed method can directly combine adversarial training and AP maximization. In the real world, I don't think attacking a single example is worth investigating, nor that the described situation exists ubiquitously. Moreover, it is possible that in many realistic scenarios the difference between average accuracy and average precision might not be very large. Please justify. - I am not sure why the two regularizations AdAP_MM and AdAP_PZ should be proposed together when the first one does not have a significant advantage compared to the last one. - The proposed method is limited to binary and imbalanced classification settings, which restricts the problem setting. The performance of multi-class or balanced adversarial training is still questionable. - How are the hyper-parameters $\lambda$, $\gamma_1$ and $\gamma_2$ decided? Are they sensitive to different values? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to weaknesses. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed in the paper. No potential negative societal impact is found. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Below we address your concerns. **Q1:** About the motivation of this paper. **A:** First, we'd like to clarify that average precision is important, especially in scenarios with highly imbalanced datasets, e.g. medical diagnosis, molecular property prediction (e.g. the MIT AICures challenge, Open Graph Benchmark) and object detection, where there can be thousands of negative samples but only a few positive samples. Suppose we are required to diagnose a rare lethal disease that only 10 out of 10,000 patients suffer from. A naive model may classify all patients as negative and reach 99.9% accuracy. However, AP is a ranking metric that is particularly attuned to errors at the top of the ranking list, which makes it a more appropriate metric for reflecting model performance on highly imbalanced datasets: such a naive model achieves an AP score of essentially 0, indicating that it has learned nothing. Second, in this paper we are not limited to the setting where the adversary attacks only a single example. The reason we use only one attacked sample in the introduction is to illustrate the importance of average precision with a simple example; in fact, throughout the paper we address problems where all the input samples are attacked. In addition, we agree that in some realistic scenarios the difference between average accuracy and average precision might not be very large, particularly when the dataset is balanced. However, such balanced scenarios are not the primary focus of this paper. **Q2:** I am not sure why two regularizations AdAP_MM and AdAP_PZ should be proposed together when the first one does not have a significant advantage compared to the last one. **A:** AdAP_MM and AdAP_PZ represent two straightforward adversarial AP maximization baselines obtained by directly extending the ideas from \[Ref1\] and TRADES. 
We introduce them as baselines to contrast with our proposed AdAP_LN and AdAP_LPN methods, since we aim to compare our proposed method with related adversarial AP maximization ideas to show its superiority. AdAP_LPN and AdAP_LN are proposed to show that (i) the proposed listwise regularization is key to improving performance; and (ii) neither dominates the other, meaning that the traditional pointwise regularization can sometimes help. Ref1: Madry, Aleksander, et al. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR 2018. **Q3:** The proposed method is limited to binary and imbalanced classification settings, which are strict on the problem setting. The performance of multi-class or balanced adversarial training is still questionable. **A:** There have been many papers proposing solutions for the multi-class or balanced setting. However, adversarial training for imbalanced data is still under-explored. Hence, as mentioned in the introduction, this paper focuses on the imbalanced adversarial training problem, which is meaningful and important, and in which accuracy-based adversarial training methods are not sufficient. This should be considered a strength rather than a limitation. **Q4:** How are the hyper-parameters $\lambda, \gamma_1, \gamma_2$ decided? Are they sensitive to different values? **A:** From Section 5.3, we can observe that $\lambda$ is a sensitive parameter that balances the trade-off between robustness and clean AP performance. This is expected; hence, it is necessary to tune it carefully to achieve the desired performance. $\gamma_1, \gamma_2$ are tuned in $\\{0.1, 0.9\\}$ as mentioned in the paper. We have added experiments on the CIFAR10 and BDD100K datasets to show the sensitivity to these parameters below. 
| CIFAR10 (average) | $\gamma_2=0.1$ | $\gamma_2=0.3$ | $\gamma_2=0.5$ | $\gamma_2=0.7$ | $\gamma_2=0.9$ |
|-------------------|----------------|----------------|----------------|----------------|----------------|
| $\gamma_1=0.1$ | 0.2719(0.0116) | 0.273(0.0079) | 0.275(0.0105) | 0.2756(0.0135) | 0.2753(0.0114) |
| $\gamma_1=0.3$ | 0.2701(0.012) | 0.2696(0.01) | 0.2753(0.0128) | 0.2733(0.0101) | 0.2745(0.0107) |
| $\gamma_1=0.5$ | 0.2674(0.0088) | 0.2741(0.011) | 0.2688(0.0108) | 0.2685(0.0135) | 0.2741(0.0113) |
| $\gamma_1=0.7$ | 0.2649(0.0103) | 0.2713(0.007) | 0.2686(0.0165) | 0.2696(0.0138) | 0.2658(0.0103) |
| $\gamma_1=0.9$ | 0.2668(0.0137) | **0.2766(0.0131)** | 0.2668(0.013) | 0.2689(0.0139) | 0.2641(0.0124) |

| **BDD100K (rainy)** | $\gamma_2=0.1$ | $\gamma_2=0.3$ | $\gamma_2=0.5$ | $\gamma_2=0.7$ | $\gamma_2=0.9$ |
|---------------------|----------------|----------------|----------------|----------------|----------------|
| $\gamma_1=0.1$ | 0.2433(0.0209) | 0.2436(0.0205) | 0.2404(0.0188) | 0.2415(0.0220) | 0.2423(0.0215) |
| $\gamma_1=0.3$ | 0.2471(0.0219) | 0.2440(0.0174) | 0.2432(0.0169) | 0.2473(0.0214) | 0.2460(0.0188) |
| $\gamma_1=0.5$ | 0.2479(0.0210) | 0.2475(0.0209) | 0.2476(0.0217) | 0.2450(0.0189) | 0.2472(0.0187) |
| $\gamma_1=0.7$ | 0.2507(0.0203) | 0.2453(0.0204) | 0.2471(0.0179) | 0.2461(0.0206) | 0.2480(0.0176) |
| $\gamma_1=0.9$ | **0.2522(0.0208)** | 0.2479(0.0200) | 0.2485(0.0201) | 0.2447(0.0193) | 0.2478(0.0193) |

--- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the rebuttal; I have checked your answer, which addresses most of my concerns. I decided to raise my score to 5.
Summary: The paper focuses on adversarial training in terms of Average Precision (AP), which is guided by three design principles: trade-off between AP and robustness, robustness in terms of AP instead of accuracy, and consistency of attacks. By utilizing the techniques of stochastic compositional optimization, the paper proposes a series of adversarial training algorithms to handle the inter-dependent perturbations. Strengths: 1. Novelty: To the best of our knowledge, it is the first work to consider adversarial training of AP. It is a non-trivial extension due to the non-decomposable formulation of AP. 2. Significance: As a widely-used ranking metric, the robustness of AP is significant to the machine learning community. Besides, the design principles and techniques might be instructive to the robustness of other ranking metrics. 3. Clarity: The paper is overall well-written with clear notations. 4. Soundness: The effectiveness of the proposed method is well-supported by experiments under various settings. Weaknesses: 1. The authors solve a non-zero-sum game to ensure consistency. However, unlike previous work on adversarial training, the equilibrium state of this game is unknown and requires more discussion. 2. Fig. 2 provides a visualization of the trade-off between robustness and AP, but how the hyperparameter $\lambda$ affects the trade-off is unclear. Ideally, it should present a positive correlation. 3. The related work could be further improved by discussing the latest literature on AP stochastic optimization such as [1,2]. Ref: [1] Wang et. al. Momentum accelerates the convergence of stochastic auprc maximization. ICML, 2022. [2] Wen et. al. Exploring the algorithm-dependent generalization of auprc optimization with list stability. NeurIPS, 2022. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses part for the major concerns. Other minor issues are as follows: 1. The design of $R$ in Eq. 
(8) requires a detailed explanation: AP involves all examples, while the top-one probability focuses on the top-one examples. Could we apply ranking-based functions instead? 2. The proposed algorithms share similar properties with [19] and [32]. Is it possible to provide a corresponding convergence analysis based on these works? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first address what the reviewer listed as weaknesses and then respond to the reviewer's questions: **Q1:** The authors solve a non-zero-sum game to ensure consistency. However, unlike previous work on adversarial training, the equilibrium state of this game is unknown and requires more discussion. **A:** We appreciate your comments. We acknowledge that this is a separate theoretical issue. Even for zero-sum game approaches, the non-convex nature of the problem may render a Nash equilibrium non-existent \[Ref1\]. We hope our research will motivate more theoretical researchers to study this problem. Ref1: Jin et al. What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization. ICML, 2020. **Q2:** Fig. 2 provides a visualization of the trade-off between robustness and AP, but how the hyperparameter affects the trade-off is unclear. Ideally, it should present a positive correlation. **A:** As the curves in Fig. 2 show, there is indeed a positive correlation between robust AP and $\lambda$ for relatively small values of $\lambda$. Nevertheless, as $\lambda$ becomes excessively large, this correlation diminishes. This is reasonable, since we would expect a meaningless model if $\lambda$ went to infinity. **Q3:** The related work could be further improved by discussing the latest literature on AP stochastic optimization such as \[1,2\]. **A:** Thank you for your suggestion. We will incorporate a discussion of the suggested literature to enhance the related work section of the revised paper. **Q4:** The design of $R$ in Eq. (8) requires a detailed explanation: AP involves all examples, while the top-one probability focuses on the top-one examples. Could we apply ranking-based functions instead? **A:** Indeed, the regularization based on the top-one probability $p(x_i) = \frac{\exp(h_w(x_i))}{\sum_{j=1}^n\exp(h_w(x_j))}$ is a ranking-based function. 
Please note that the regularization sums over all data rather than just one example. In addition, the denominator of the top-one probability also includes all examples, similar to AP. A similar loss was originally proposed in \[Ref2\], known as ListNet, for learning to rank; it encourages elevating positive samples to higher positions in the list as opposed to negative ones. Ref2: Cao et al. Learning to rank: from pairwise approach to listwise approach. ICML 2007. **Q5:** The proposed algorithms share similar properties with \[19\] and \[32\]. Is it possible to provide a corresponding convergence analysis based on these works? **A:** Please note that our problem is much more challenging than those in \[19, 32\]. We can also view the problem as a bilevel optimization problem; however, the lower-level problem itself is non-convex, and almost all existing convergence analyses for bilevel optimization assume convexity of the lower-level problem. Deriving the convergence of our algorithm, or of any comparable algorithm, would be significant work in itself. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. While some concerns have been addressed, the following issues are still unclear: **Q2**: Fig. 2 only presents a positive correlation between **robust AP and clean AP** instead of **robust AP and $\lambda$**. Please plot "robust AP vs. $\lambda$" and "clean AP vs. $\lambda$" respectively to support the conclusions. **Q4**: Although both the top-one probability and AP involve all examples, it would be better if more theoretical derivations were provided. In fact, most ranking losses involve all examples, such as NDCG. Compared with other ranking functions, the advantages of the top-one probability are unclear. **Q5**: The theoretical convergence analysis is indeed challenging. However, it is necessary to provide an empirical analysis, since the paper proposes a new optimization algorithm. 
--- Reply to Comment 1.1.1: Comment: Thank you for the prompt feedback; we hope to address the remaining issues. **Q2:** Fig. 2 only presents a positive correlation between robust AP and clean AP instead of robust AP and $\lambda$. Please plot "robust AP vs. $\lambda$" and "clean AP vs. $\lambda$" respectively to support the conclusions. **A:** For illustration purposes, we provide the relationship between robust AP/clean AP and $\lambda$ on CIFAR10_cls1, CIFAR100_cls2, BDD100K(cloudy), and CelebA(gray_hair), corresponding to Fig. 2 in the paper. The results are summarized in the table below, since the system does not support image uploads. We report the results in the format robust AP/clean AP, with the values representing the average of three runs with different seeds. From the table, we can observe that there is a positive correlation between robust AP and $\lambda$ for relatively small values of $\lambda$, and this correlation diminishes as $\lambda$ continues to increase. Moreover, a negative correlation between clean AP and $\lambda$ is consistently observed, in accordance with our expectations. We will include these results in the revision. 
| Dataset | Method | $\lambda=0.01$ | $\lambda=0.4$ | $\lambda=0.8$ | $\lambda=1$ | $\lambda=4$ | $\lambda=8$ | $\lambda=10$ |
|---------|--------|------|------|------|------|------|------|------|
| CIFAR10 (cls\_1) | AdAP\_LN | 0.1153/0.9525 | 0.4264/0.9295 | 0.4590/0.9190 | 0.4464/0.9099 | 0.3181/0.8649 | 0.2695/0.8385 | 0.2522/0.8356 |
| CIFAR10 (cls\_1) | AdAP\_LPN | 0.1504/0.9358 | 0.5185/0.9314 | 0.5263/0.9146 | 0.5229/0.9067 | 0.4818/0.8468 | 0.4517/0.8102 | 0.4411/0.7968 |
| CIFAR100 (cls\_2) | AdAP\_LN | 0.0811/0.8752 | 0.2782/0.7984 | 0.3184/0.7668 | 0.3238/0.7540 | 0.3644/0.6736 | 0.3437/0.6286 | 0.3436/0.6164 |
| CIFAR100 (cls\_2) | AdAP\_LPN | 0.1604/0.8332 | 0.3475/0.7896 | 0.3809/0.7558 | 0.3859/0.7441 | 0.3724/0.6470 | 0.3501/0.6011 | 0.3392/0.5827 |
| BDD100K (cloudy) | AdAP\_LN | 0.0621/0.6504 | 0.2221/0.5986 | 0.2742/0.5632 | 0.2815/0.5577 | 0.2796/0.4935 | 0.2347/0.4937 | 0.2103/0.4861 |
| BDD100K (cloudy) | AdAP\_LPN | 0.0411/0.6570 | 0.2455/0.5985 | 0.2682/0.5681 | 0.2726/0.5600 | 0.2725/0.4886 | 0.2464/0.4366 | 0.2473/0.4295 |
| CelebA (gray\_hair) | AdAP\_LN | 0.0264/0.7037 | 0.1831/0.6282 | 0.2449/0.6061 | 0.2696/0.6010 | 0.2859/0.5221 | 0.2714/0.4662 | 0.2601/0.4506 |
| CelebA (gray\_hair) | AdAP\_LPN | 0.0284/0.7123 | 0.2280/0.6259 | 0.2555/0.5907 | 0.2644/0.5804 | 0.2799/0.4948 | 0.2676/0.4515 | 0.2676/0.4515 |

**Q4:** Although both the top-one probability and AP involve all examples, it would be better if more theoretical derivations were provided. In fact, most ranking losses involve all examples, such as NDCG. Compared with other ranking functions, the advantages of the top-one probability are unclear. **A:** (1) NDCG is not appropriate for defining the regularization term. Please note that our regularization needs to measure the divergence between the **real-valued ranking scores** of two sets of data, i.e., those of the clean data and the perturbed data. NDCG measures the consistency between **real-valued ranking scores and ground-truth discrete relevance scores**, making it inappropriate for our purpose. 
(2) The top-one probabilities, which spring from the learning-to-rank literature, provide a natural way of measuring the divergence between two probability distributions, enabling the characterization of the boundary error derived in Theorem 1. (3) Other rank correlation measures (e.g., Spearman's $\rho$, Kendall's $\tau$) can also measure the difference between two ranking results. Nevertheless, the advantage of using top-one probabilities is that they do not involve all pairs of data, so their optimization is much more efficient. **Q5:** The theoretical convergence analysis is indeed challenging. However, it is necessary to provide an empirical analysis, since the paper proposes a new optimization algorithm. **A:** Thank you for your valuable suggestions. We will provide empirical results showing the convergence of our algorithms in the revision. Below we show some convergence results for our proposed AdAP_LN algorithm. Specifically, we set $\lambda=1, \gamma_1=0.1, \gamma_2=0.1$ and run the AdAP_LN algorithm on the CIFAR10 and BDD100K datasets for a total of 120 and 80 epochs, respectively. We evaluate the training loss after each epoch and report the loss values below, as image uploads are not supported by the system. We present the AP loss (i.e., $P(w)$ in Equation 9) and the regularization term (i.e., $R(w,\delta,D)$ in Equation 9) separately, as well as the sum of the two losses. Each experiment is repeated three times with different random seeds, and we report the average loss values. The results demonstrate the convergence of our algorithm. We will include plots of the convergence curves in the revision.
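The top-one-probability regularization discussed in Q4 above can be sketched in a few lines. The snippet below is an illustrative sketch only (the exact form of $R$ in Eq. 8 of the paper may differ): it computes ListNet-style top-one distributions over a clean and a perturbed score list and measures their KL divergence, which is invariant to a uniform score shift (ranking preserved) but grows when the perturbation reorders the list.

```python
import math

def top_one_probs(scores):
    # ListNet top-one probability: softmax over the real-valued scores
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def listwise_divergence(clean_scores, adv_scores):
    # KL(P_clean || P_adv) between the two top-one distributions;
    # it grows when the perturbation reorders the list
    p = top_one_probs(clean_scores)
    q = top_one_probs(adv_scores)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A uniform shift leaves the softmax (and hence the ranking) unchanged ...
no_change = listwise_divergence([2.0, 1.0, -0.5], [3.0, 2.0, 0.5])
# ... whereas reversing the ranking is heavily penalized.
reordered = listwise_divergence([2.0, 1.0, -0.5], [-0.5, 1.0, 2.0])
```

Consistent with point (3), this costs $O(n)$ per list, unlike pairwise rank-correlation measures such as Kendall's $\tau$, which involve all $O(n^2)$ pairs.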
Summary: This paper considers the adversarial robustness of the AP metric, which is an important measure for deep learning in some imbalanced applications. To do this, the authors develop a novel formulation that combines an AP surrogate loss with a regularization term toward adversarial ranking robustness, maintaining the consistency between the ranking of clean data and that of perturbed data. Empirical studies demonstrate the effectiveness of the proposed methods. Strengths: To the best of our knowledge, this is the first work to consider the AP-based adversarial robustness problem, which will bring some new insights to the adversarial robustness community. The contributions of this paper are novel and the theoretical results are technically sound. The empirical results are also promising. Weaknesses: However, some essential issues should be fixed: 1. During the evaluation, this paper merely considers the simple FGSM-based attack, which is insufficient to support the effectiveness of the proposed method. Some stronger attacks, such as PGD-based attacks and AutoAttack [1], should be considered. 2. Another minor question is how AP-based AT impacts the performance of accuracy-based AT. Can the proposed methods improve AdAP without sacrificing overall accuracy? Merely considering AP while overlooking accuracy may be meaningless. 3. Why do we need to develop AdAP? What are the differences between AdAP and AdAUC in ranking performance? Please give some intuitive examples like Fig. 1. 4. Finally, some of the latest advanced AP optimization methods are missing, such as [2]. Ref: [1] Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. [2] Exploring the algorithm-dependent generalization of AUPRC optimization with list stability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please carefully address all my concerns in the weaknesses part. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not include any limitations for their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for dedicating their time to providing a comprehensive review, and we are committed to addressing the raised issues. **Q1:** Adding some stronger attacks, such as PGD-based attacks and AutoAttack. **A:** We appreciate the reviewer's suggestion. First, we'd like to apologize for the confusion caused by the terminology 'FGSM attack'. In all our experiments, we utilized the iterative FGSM attack, which indeed works as an $l_{\infty}$-bounded PGD attack; following \[Ref2\], we abused the terminology in Section 5. To provide a more comprehensive assessment, we have evaluated all the models against the strong attack method $Auto-PGD_{CE}$ proposed in \[Ref1\]. The results are summarized in Table 1 of the global response (PDF file). We can observe that $Auto-PGD_{CE}$ is a stronger attack, as it degrades the performance of all models compared with the iterative FGSM (standard PGD) method employed in Section 5.1. Nevertheless, the superiority of our proposed methods remains evident. Please note that we did not compare with the ensemble AutoAttack, since it is tailored for multi-class classification with respect to accuracy. For instance, the $APGD_{DLR}$ approach presented in \[Ref1\] requires at least three classes. We focus on AP maximization on imbalanced binary classification data, and multi-class datasets such as CIFAR-10 are converted into multiple one-vs-all binary classification tasks, with reported performance averaged over these tasks. Ref1: Croce & Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. ICML 2020. Ref2: Zhang et al. Theoretically principled trade-off between robustness and accuracy. ICML 2019. **Q2:** Another minor question is how AP-based AT impacts the performance of accuracy-based AT. Can the proposed methods improve AdAP without sacrificing overall accuracy? 
**A:** Please note that in many applications (e.g., learning to rank, imbalanced classification), AP is much more informative than accuracy. For example, in imbalanced classification with 99% of the data being negative, achieving 99% accuracy with a naive model that predicts everything as negative is meaningless. Hence, the focus of the paper is to report AP. Nevertheless, we have evaluated the accuracy of our models on the CIFAR10 dataset. To this end, we need to learn a threshold on the validation data, since our model is ranking-based. Based on our model and the threshold, we have evaluated our models in terms of accuracy and compared them with other methods in Table 2 of the global response (PDF file). The results show that (i) our proposed methods improve AdAP and adversarial accuracy at the same time; and (ii) all adversarial training methods' accuracies cluster around 0.9, the ratio of negative samples, offering limited insight into model performance. **Q3:** Why do we need to develop AdAP? What are the differences between AdAP and AdAUC in ranking performance? Please give some intuitive examples like Fig. 1. **A:** In scenarios involving highly imbalanced datasets, AP, as demonstrated in \[Ref1\], offers a more accurate reflection of the model's performance than AUC. For example, suppose we have a dataset that contains 10 positive samples and 10,000 negative samples (e.g., document retrieval or object detection scenarios). If one model ranks the 10 positives higher than all the negatives, then both AP and AUC reach 1. After perturbation, if the model ranks 2 negatives at the top and keeps the rest lower than all the positives, the AUC score remains at 0.9998 while the AP score degrades to 0.8333. This illustrates that, in such cases, the AP metric is more informative about performance. Ref1: Davis and Goadrich. The relationship between Precision-Recall and ROC curves. ICML 2006. 
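The numbers in the example above can be checked with a short standalone script (a sketch; the 0.8333 figure corresponds to the interpolated AP variant, while the standard non-interpolated AP drops even lower, so the gap to AUC is large under either convention). The same helper also reproduces the "naive constant predictor" point from Q2 of an earlier review: near-perfect accuracy with AP at roughly the positive prevalence.

```python
def auc_and_aps(labels_by_rank):
    # labels_by_rank: 1/0 labels ordered from highest to lowest model score
    n_pos = sum(labels_by_rank)
    n_neg = len(labels_by_rank) - n_pos
    neg_seen, wrong_pairs, hits, precisions = 0, 0, 0, []
    for k, y in enumerate(labels_by_rank, start=1):
        if y == 1:
            hits += 1
            wrong_pairs += neg_seen          # negatives ranked above this positive
            precisions.append(hits / k)      # precision at this positive's rank
        else:
            neg_seen += 1
    auc = 1 - wrong_pairs / (n_pos * n_neg)  # fraction of correctly ordered pairs
    ap = sum(precisions) / n_pos             # standard non-interpolated AP
    # interpolated AP: at each positive, take the max precision at equal or higher recall
    ap_interp = sum(max(precisions[j:]) for j in range(n_pos)) / n_pos
    return auc, ap, ap_interp

# 2 negatives on top, then all 10 positives, then the remaining 9,998 negatives
auc, ap, ap_interp = auc_and_aps([0, 0] + [1] * 10 + [0] * 9998)
# auc == 0.9998, ap_interp == 0.8333..., ap is lower still

# Worst-case constant predictor: positives ranked last among 10,000 samples
_, naive_ap, _ = auc_and_aps([0] * 9990 + [1] * 10)
# naive_ap is below the 0.001 positive prevalence despite 99.9% "all negative" accuracy
```

This makes the claim concrete: a perturbation that barely moves AUC (0.0002) costs AP over 0.16 even under the more forgiving interpolated convention.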
**Q4:** Adding citations to advanced AP optimization methods, such as \[2\]. **A:** Thank you for your suggestion for improving our paper. We will include and discuss the suggested advanced AP optimization methods in the revision. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: Thank you for your effort. I think most of my concerns have been addressed.
Summary: This paper extends the discussion of adversarial robustness from accuracy to precision, and also extends the TRADES solution to this new setting. The paper is fairly standard, with a new problem, a new solution, some minor theoretical studies (also extended from TRADES), and some fairly good empirical results. The paper is highly condensed, so several critical points need further clarification. Strengths: - The paper interestingly studies a new problem of precision in the adversarial setting. - The paper introduces a new algorithm to achieve the listwise regularizations. Weaknesses: 1. The empirical method and theoretical discussions are extended from TRADES, which might raise some concerns about the novel contributions of this work. 2. The experiments are only conducted against the FGSM attack, which is probably too limited, especially since in several cases the performances are fairly close. - Please also use PGD and AutoAttack. 3. An essential step of the algorithm is the approximation in Section 4.2; it is probably necessary to offer more empirical results in this regard, such as ablation studies with varying batch sizes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It seems the authors forgot to define AdAP_LPN properly. - The only clue I can find is at line 337, where AdAP_LPN combines listwise and pointwise adversarial regularization; by contrast, AdAP_LN is probably just listwise regularization. 2. I cannot find information about the threat models behind TRADES and PGD, and cannot confirm whether these threat models (models used to generate adversarial attacks) are the same as those of the AdAP family models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: do not find explicit discussions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and feedback on our paper. In the following, we address the raised concerns and questions. **Q1:** About the novel contributions of this work. **A:** We agree the adversarial regularization is motivated similarly to TRADES, which has been widely used for adversarial training. However, there are some key differences between our work and TRADES in how the regularization and attack are set up. (1) The regularization in TRADES measures the pointwise difference between predictions on clean data and perturbed data; instead, the regularization in our method measures the listwise difference between predictions on clean data and perturbed data. (2) The attacks in the training of TRADES are generated in a zero-sum framework to maximize the regularization term; in contrast, the attacks in our training are generated in a non-zero-sum framework. These two differences are very important for improving the performance of adversarial AP. In addition, the optimization of the proposed listwise regularization, maintaining the consistency between the ranking of clean data and that of perturbed data, is a non-trivial extension of TRADES. **Q2:** Adding more attacks, e.g., PGD and AutoAttack. **A:** We appreciate the reviewer's suggestion. First, we'd like to apologize for the confusion caused by the terminology 'FGSM attack'. In all our experiments, we utilized the iterative FGSM attack method, which indeed works as an $l_{\infty}$-bounded PGD attack. Following \[Ref2\], we abused the terminology in Section 5. To provide a more comprehensive assessment, we have evaluated all the models against a strong attack method, $Auto-PGD_{CE}$, proposed in \[Ref1\]. The results are summarized in Table 1 in the global response (PDF file). 
We can observe that $Auto-PGD_{CE}$ is a stronger attack, as it leads to a deterioration of performance across all models compared with the iterative FGSM (standard PGD) method employed in Section 5.1. However, the superiority of our proposed methods remains evident. Please note that we did not compare with the ensemble AutoAttack since it is tailored for multi-class classification concerning accuracy. For instance, the $APGD_{DLR}$ approach presented in \[Ref1\] requires at least three classes. We focus on AP maximization on imbalanced binary classification data, and multi-class datasets such as CIFAR-10 are converted into multiple one-vs-all binary classification tasks with reported performance averaged over these tasks. Ref1: Croce & Hein. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. ICML 2020. Ref2: Zhang et al. Theoretically Principled Trade-off between Robustness and Accuracy. ICML 2019. **Q3:** Adding ablation studies with varying batch sizes to demonstrate the approximation in Section 4.2. **A:** We appreciate the reviewer's constructive comment. We have conducted some empirical studies to investigate the proposed AdAP_LN's sensitivity to batch size on the CIFAR10 dataset. The results are included in Figure 1 in the global response (PDF file). They show that our method does not require a very large batch size to achieve good performance and is generally not sensitive to batch size. **Q4:** It seems the authors forgot to define AdAP_LPN properly. The only clue I can find is at line 337, where AdAP_LPN combines listwise and pointwise adversarial regularization; in contrast, AdAP_LN is probably just listwise regularization. **A:** We apologize for the confusion. AdAP_LPN is defined in Equation 10 in Section 4.1. Please refer to line 194. 
**Q5:** About the threat models behind TRADES and PGD, and whether these threat models (models used to generate adversarial attacks) are the same as the AdAP family models or not. **A:** As introduced in Sections 5.1 and 5.2, when we evaluate robustness against white-box attacks, the threat model is the evaluated model itself, since white-box attacks have full access to the target model. However, when we evaluate robustness against black-box attacks, the threat model behind TRADES, PGD, MART, and the AdAP family models is the same model trained with CE loss minimization on clean data, for fair comparison.
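To make the attack terminology in the rebuttal above concrete, here is a minimal illustrative sketch (ours, not the authors' code) of an iterative FGSM / $l_{\infty}$-bounded PGD attack as described in the response to Q2: repeated signed-gradient ascent steps on the loss, each followed by projection back into the $\epsilon$-ball around the clean input. The toy logistic model, step size, and dimensions are assumptions chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, y):
    """Binary cross-entropy loss of a toy linear model and its gradient w.r.t. x."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = (p - y) * w  # analytic d(loss)/dx for the linear model
    return loss, grad

def pgd_attack(x, w, y, eps=0.1, step=0.02, n_steps=6):
    """Maximize the loss subject to ||x_adv - x||_inf <= eps (iterative FGSM)."""
    x_adv = x.copy()
    for _ in range(n_steps):
        _, g = loss_and_grad(x_adv, w, y)
        x_adv = x_adv + step * np.sign(g)         # FGSM-style signed step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = rng.normal(size=8)
x_adv = pgd_attack(x, w, y=1.0)
clean_loss, _ = loss_and_grad(x, w, 1.0)
adv_loss, _ = loss_and_grad(x_adv, w, 1.0)  # higher than clean_loss
```

With a single step and `step >= eps`, the same loop reduces to one-shot FGSM, which is why the two names get conflated in the literature.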
Rebuttal 1: Rebuttal: We thank the reviewers for their comments and feedback on our paper. We have included some experimental results in the PDF file, including adversarial robustness against the $Auto-PGD_{CE}$ white-box attack, adversarial accuracy against the white-box iterative FGSM attack on the CIFAR10 dataset, and an illustration of AdAP\_LN's insensitivity to batch size. Pdf: /pdf/5493bd5d7354b28c2a7b75472440ffa585d0cbf8.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper investigates how to improve the robustness of a model under adversarial attacks while ensuring its Average Precision (AP) on clean data samples. The studied problem can be very important in some application scenarios but has not been extensively explored yet. By integrating the idea of existing adversarial-training-based methods into AP maximization algorithms, a novel solution is proposed in this paper. Experimental results obtained on multiple datasets with various binary imbalanced settings demonstrate the superiority of the proposed solution in terms of AP and robustness, compared with baseline methods, which focus only on optimizing either the AP or the robustness of models. Strengths: 1. The problem explored in this paper, i.e., enhancing model robustness while maintaining AP, is a practical and important problem in some application scenarios but has not been well studied, as related works only focused on improving either model robustness or model AP. 2. A novel solution is proposed in this paper by integrating existing adversarial training methods with AP maximization algorithms. Experimental results including performance comparison and ablation studies verify the effectiveness of the proposed solution. Weaknesses: 1. To provide a more comprehensive perspective on evaluating the robustness of trained models, stronger attack methods, such as AutoAttack, should be included in the experiments. 2. The authors didn't discuss limitations of the proposed solution. Based on the description in the methodology part, the training efficiency of the proposed solution may be a problem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the introduction, the authors mentioned that one nice property when designing adversarial training methods for AP maximization is "consistency of attacks between training and inference". Can the authors explain this claim further? Why are attacks expected to be consistent between training and inference? 
Generally, to evaluate model robustness more comprehensively, any kind of attack can be used in the inference stage. 2. How is the training efficiency of the proposed solution? It seems the proposed solution will take a much longer time to train, compared with adversarial-training-based methods. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The efficiency of the proposed solution may be a problem. Hence, it would be better if the authors could discuss or provide some experimental results on the efficiency of the proposed solution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. **Q1:** To provide a more comprehensive perspective on evaluating the robustness of trained models, stronger attack methods, such as AutoAttack, should be included in the experiments. **A:** We appreciate the reviewer's suggestion. To provide a more comprehensive assessment, we have evaluated all the models against a strong attack method, $Auto-PGD_{CE}$, proposed in \[Ref1\]. The results are summarized in Table 1 in the global response (PDF file). We can observe that $Auto-PGD_{CE}$ is a stronger attack, as it leads to a deterioration of performance across all models compared with the iterative FGSM (standard PGD) method employed in Section 5.1. However, the superiority of our proposed methods remains evident. Please note that we did not compare with the ensemble AutoAttack since it is tailored for multi-class classification concerning accuracy. For instance, the $APGD_{DLR}$ approach presented in \[Ref1\] requires at least three classes. We focus on AP maximization on imbalanced binary classification data, and multi-class datasets such as CIFAR-10 are converted into multiple one-vs-all binary classification tasks with reported performance averaged over these tasks. Ref1: Croce & Hein. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. ICML 2020. **Q2:** The authors didn't discuss limitations of the proposed solution. Based on the description in the methodology part, the training efficiency of the proposed solution may be a problem. **A:** We thank the reviewer for the constructive comment. We agree with the reviewer that the proposed adversarial training methods are more time-consuming than conventional natural training and some adversarial training methods (e.g., PGD). For a detailed runtime analysis, please refer to the response to Q4. We will add a limitations part to discuss this in the revision. 
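As a concrete illustration of the one-vs-all conversion mentioned in the response to Q1, the sketch below (ours, not the authors' evaluation code) turns multi-class labels into per-class binary tasks and includes a minimal Average Precision implementation that could be averaged over those tasks. All names and the toy data are hypothetical.

```python
import numpy as np

def to_one_vs_all(labels, n_classes=10):
    """Return an (n_classes, n_samples) array of binary task labels:
    task k marks class-k samples as positive, all others as negative."""
    labels = np.asarray(labels)
    return np.stack([(labels == k).astype(int) for k in range(n_classes)])

def average_precision(y_true, scores):
    """AP = mean of precision evaluated at the rank of each positive sample."""
    order = np.argsort(-np.asarray(scores))      # sort by descending score
    y = np.asarray(y_true)[order]
    cum_pos = np.cumsum(y)                       # positives seen up to each rank
    ranks = np.arange(1, len(y) + 1)
    return float(np.sum((cum_pos / ranks) * y) / max(y.sum(), 1))

labels = [0, 1, 2, 0, 1]
binary_tasks = to_one_vs_all(labels, n_classes=3)  # 3 imbalanced binary tasks
# e.g. binary_tasks[0] marks the class-0 samples: [1, 0, 0, 1, 0]
```

On an imbalanced binary task, AP computed this way is sensitive to the ranking of the rare positives, which is why it is the metric of interest here rather than accuracy.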
**Q3:** Regarding the \"consistency of attacks between training and inference\", can the authors add more explanation about this claim? Why are attacks expected to be consistent between training and inference? **A:** We apologize for the confusion. The consistency of attacks between training and inference means that the attack is generated based only on individual data. In the inference stage, the attack is usually imposed on individual data without referring to other data, which is referred to as a pointwise attack. Hence, we propose to use pointwise attacks instead of listwise attacks in the training stage to maintain such consistency, which helps improve the results. It is important to note that we do not restrict the pointwise attacks used in training to be the same as those at inference. We have added an experiment to demonstrate the effectiveness of our approaches when the inference uses a different pointwise attack (the $Auto-PGD_{CE}$ attack). The results are demonstrated in the response to Q1. **Q4:** How is the training efficiency of the proposed solution? It seems the proposed solution will take a much longer time to train, compared with adversarial-training-based methods. **A:** Below, we've included the results of efficiency comparisons for all the models. In the experiment, we set the parameters that could affect training time exactly the same (e.g., batch size 128, 60 total epochs, adversarial samples generated with 6 projected gradient ascent steps) and ran all the models three times on the Class_0 task of CIFAR10. From the table, we can observe that (i) adversarial training methods are generally more time-consuming than natural training; (ii) our proposed AdAP_LN and AdAP_LPN methods cost a little more time than the traditional PGD method but much less time than TRADES. 
This is because, to generate adversarial samples in training, TRADES solves the maximization of the KL divergence between the probabilities predicted on clean data and perturbed data (i.e., $\max_{\|\delta\|\leq \epsilon}\sum_k h(x)_k\log h(x)_k - h(x)_k\log h(x+\delta)_k$, where $h(x)_k$ and $h(x+\delta)_k$ are the predicted probabilities for class $k$ on clean data and perturbed data, respectively), while PGD and our proposed methods directly solve the maximization of the cross entropy (i.e., $\max_{\|\delta\| \leq \epsilon}-\log h(x+\delta)_y$). In adversarial training, since each gradient descent step w.r.t. $w$ requires multiple gradient ascent steps w.r.t. $\delta$, the computational expense primarily stems from the projected gradient ascent steps, which can also be observed by comparing the efficiency of CE Min. with PGD. TRADES requires two forward propagations and one backpropagation in each projected gradient ascent step, whereas PGD and our methods need only one forward propagation and one backpropagation. | Methods | Run 1 | Run 2 | Run 3 | Average | |-----------|---------|---------|---------|-----------| | CE Min. | 563s | 566s | 568s | 565.67s | | AP Max. | 589s | 590s | 589s | 589.33s | | PGD | 2833s | 2804s | 2803s | 2813.33s | | TRADES | 4203s | 4182s | 4179s | 4188.00s | | MART | 3192s | 3205s | 3194s | 3197.00s | | AdAP\_LN | 3227s | 3213s | 3211s | 3217.00s | | AdAP\_LPN | 3234s | 3219s | 3218s | 3223.67s | --- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I have read the authors' reply, which addressed most of my questions and concerns. Considering the overall quality of this work, I decide to keep my original score 7.
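The runtime gap discussed in the rebuttal above comes down to the two inner maximization objectives. A minimal numerical sketch (ours, not the paper's code) of those objectives follows: TRADES's KL term needs both the clean prediction $h(x)$ and the perturbed prediction $h(x+\delta)$, whereas the plain cross entropy needs only the perturbed one. The logits below are arbitrary illustrative values.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_objective(h_clean, h_adv):
    """TRADES inner objective: sum_k h(x)_k log h(x)_k - h(x)_k log h(x+delta)_k."""
    return float(np.sum(h_clean * (np.log(h_clean) - np.log(h_adv))))

def ce_objective(h_adv, y):
    """PGD / AdAP inner objective: -log h(x+delta)_y."""
    return float(-np.log(h_adv[y]))

logits_clean = np.array([2.0, 0.5, -1.0])
logits_adv = np.array([0.5, 1.5, -0.5])
h_clean, h_adv = softmax(logits_clean), softmax(logits_adv)
kl = kl_objective(h_clean, h_adv)   # needs h(x) AND h(x+delta): two forward passes
ce = ce_objective(h_adv, y=0)       # needs only h(x+delta): one forward pass
```

Since the KL objective depends on both predictions, every inner ascent step of TRADES must re-run the forward pass on the perturbed input while also holding the clean pass, which matches the roughly 4188s vs. 2813s runtimes in the table.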
VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models
Accept (poster)
Summary: This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. The proposed framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Strengths: 1. Their experimental results not only analyzed DDPM but also score-based models. Besides, they also analyzed other acceleration sampling methods. 2. Their experiments included caption triggers. Weaknesses: 1. To show that no modifications are needed to the sampling process, the article should include details of the sampling process. 2. The order of formulas 8-12 in the article is not clear enough. Describing them in the order forward process → backward process → sampling process may be clearer. 3. Please check some spelling errors in the article. For example, "Praobility" in the title on line 138. 4. Please check whether the last term in Eq. 4 is $L_0(x_1,x_0)$. 5. The article claims that BadDiffusion will fail when the coefficient is $\frac{1}{2}$ (line 55), but there is no further explanation. 6. The article claims that an attacker only needs to obtain the model parameters $\theta_{download}$. However, to execute a backdoor attack, some adjustments need to be made to the initial noise. This is difficult to achieve in reality, and the article needs to emphasize this point. 7. Although the article extends the attack to other models, such as score-based models, there is no essential difference from BadDiffusion [2]. For BadDiffusion, by just changing the coefficient of the noise term $(1-\bar{\alpha}_{t})\mathbf{I}$ in the formula $q\left(\mathbf{x}_{t}^{\prime} \mid \mathbf{x}_{0}^{\prime}\right):=\mathcal{N}\left(\mathbf{x}_{t}^{\prime} ; \sqrt{\bar{\alpha}_{t}} \mathbf{x}_{0}^{\prime}+\left(1-\sqrt{\bar{\alpha}_{t}}\right) \mathbf{r},\left(1-\bar{\alpha}_{t}\right) \mathbf{I}\right)$ (Eq. 
6 in their article [2]) to $b_k \mathbf{I}$, one can easily obtain the results in this article. In addition, Eq. 11 is only a change of sign of Eq. 38 in [1]. [1] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. (2020). Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456. [2] Chou, S. Y., Chen, P. Y., & Ho, T. Y. (2023). How to backdoor diffusion models? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4015-4024). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors didn't mention any potential limitations. The authors could list limitations of the application. For example, to apply their method, the attacker needs to access the initial noise, which is sometimes not practical. Besides, the negative societal impact should also be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
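For reference, the backdoored forward process quoted in weakness 7 can be sketched numerically as below. This is a hedged illustration under the quoted BadDiffusion equation, not code from either paper: the trigger $\mathbf{r}$ shifts the mean by $(1-\sqrt{\bar{\alpha}_t})\mathbf{r}$, while the variance stays $(1-\bar{\alpha}_t)\mathbf{I}$. Shapes, schedule values, and the trigger itself are illustrative assumptions.

```python
import numpy as np

def backdoor_forward_sample(x0, trigger, alpha_bar_t, rng):
    """Draw x_t' ~ N(sqrt(ab_t) * x0 + (1 - sqrt(ab_t)) * r, (1 - ab_t) * I)."""
    mean = np.sqrt(alpha_bar_t) * x0 + (1.0 - np.sqrt(alpha_bar_t)) * trigger
    noise = rng.normal(size=x0.shape)
    return mean + np.sqrt(1.0 - alpha_bar_t) * noise

rng = np.random.default_rng(0)
x0 = rng.normal(size=16)        # stand-in for a (flattened) target image x_0'
trigger = np.ones(16)           # hypothetical trigger patch r
xt = backdoor_forward_sample(x0, trigger, 0.9, rng)
```

At $\bar{\alpha}_t = 1$ the sample is exactly $x_0'$ (no noise, no trigger shift), and as $\bar{\alpha}_t \to 0$ the mean drifts entirely to the trigger $\mathbf{r}$, which is the mean-shift the reviewer's comment refers to.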
Rebuttal 1: Rebuttal: Thank you for the valuable suggestions. We will reply to your comments one by one in the following. **[Including More Details of ODE Samplers]** Many thanks for your beneficial suggestions. Our paper uses genuine ODE samplers implemented by the library "diffusers." We will also introduce the samplers we used in the article. **[Reorder the Presentation of Processes]** Thank you for the excellent suggestion. We will introduce the forward process and reorder the formula presentation as you suggest. **[Correct the Typos]** Thank you for your thorough review. We will correct the typos in line 138 and equation 4. **[Explain Why BadDiffusion Fails]** Many thanks for your meaningful and valuable question. We will provide further theoretical and empirical explanations. Firstly, from a theoretical view, we can derive a backdoor reversed transitional probability $q(x_{t-1}' | x_{t}')$ given a forward transitional probability $q(x_{t}' | x_{t-1}')$. Thus, the backdoor reversed transitional probability describes a backdoored reversed diffusion process from T to 0. We can convert the backdoor reversed transitional probability into an SDE (Equation 18 in the appendix) with a Taylor approximation, as shown in Appendix B.3.1. Also, to extend the results to ODE, we can introduce an additional parameter $\zeta$ and use the Fokker-Planck equation shown in Lemma 1 in the appendix. Finally, we can obtain Equation 10 as a general form of a backdoor reversed process. On the other hand, different samplers might simulate the process deterministically (ODE) or stochastically (SDE), as in Equation 11. Comparing Equations 10 and 11, we found that the loss function of BadDiffusion can be derived with $\zeta = 1$, which means BadDiffusion is just a special SDE case of VillanDiffusion. Empirically, we also conduct an experiment described in general response 2. We evaluated BadDiffusion on multiple ODE samplers, including DPM-Solver, PNDM, and UniPC, and found that it performs poorly. 
Furthermore, in general response 3, we also provide empirical evidence for our theory. We show that the randomness of samplers is the crucial factor that affects the performance of the backdoors because when the randomness of samplers drops, the MSE of VillanDiffusion trained for ODE goes down, but BadDiffusion goes up. **[The Difference from BadDiffusion]** Thank you for your valuable thoughts. We would like to emphasize that our main contribution is to provide a unified framework to explore advanced backdoor attacks on various diffusion models, especially for those that have not been studied, such as score-based models, DDIM, DEIS, and DPM-Solver. Firstly, with flexible correction schedulers obtained from VillanDiffusion, researchers can explore backdoor attacks based on their own configuration of the diffusion models. Secondly, with our unified view of continuous and discrete diffusion processes, researchers can incorporate the concepts of ODE and SDE to analyze the effectiveness of backdoor attacks under the same framework. For example, recent works have investigated the effect of the self-consistency property of ODE on diffusion models. Some works enhance the self-consistency of the sampling trajectory to improve the sample quality in one inference step. With our framework, researchers can also investigate the impact of the self-consistency of ODE on the backdoor to discover more advanced attacks. Thus, our method can facilitate more advanced backdoor attacks on diffusion models. **[Adjusting the Initial Noise is not Feasible]** We thank the reviewer for the suggestion. We would like to clarify that the attacker actually does not need to access the initial noise. The main idea of backdooring diffusion models is modifying the mean of diffusion processes. Thus, we only need to add a specific patch to the noise during training. At inference time, the attacker only needs to attach the trigger to the data input. 
Furthermore, we also take the inpainting task as a practical example in Appendix C6. Inpainting is a common application for diffusion models. We found that by inserting a trigger into the corrupted images, the diffusion models can produce target images easily. We also use LPIPS to evaluate the quality of recovered images and MSE to measure the backdoor success rate and found that our method can achieve both high utility and specificity. **[Potential Limitation]** I want to express my gratitude for your valuable recommendations. To address your comment on discussing the limitations, we will add a discussion on the limitation that although VillanDiffusion is a general framework that covers many existing configurations of diffusion models and sampling schemes, it is possible that VillanDiffusion cannot be applied in cases where the framework does not hold. **[Negative Societal Impact]** Your valuable guidance is warmly acknowledged. On negative societal impact, we will add that "Although cast as a general backdoor attack framework, we position our work as a red-teaming tool to explore and unveil hidden risks in diffusion models. We believe our framework can help accelerate the development of robust diffusion models." --- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Thank you for the kind response. Most of the concerns were addressed and I decided to increase my score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer zhWX for your reply and for increasing our rating! Again, we thank the reviewer for the valuable comments and suggestions.
Summary: This paper proposed a backdoor attack framework called VillanDiffusion, which extends the existing backdoor analysis capabilities for diffusion models (DMs). By encompassing both unconditional and conditional DMs, including denoising-based and score-based models, as well as incorporating training-free samplers, the proposed framework enables holistic evaluations of backdoor attacks. Experiments demonstrate that VillanDiffusion not only facilitates the analysis of diverse DM configurations but also offers valuable insights into caption-based backdoor attacks on DMs. Strengths: The paper is written in a clear and understandable manner, making it accessible to a wide range of readers. The authors effectively convey their ideas and concepts, ensuring that the content is comprehensible. Besides, the authors provide a thorough analysis of backdoor attacks on diffusion models and propose a unified approach, VillanDiffusion. Weaknesses: There are concerns regarding the effectiveness of the VillanDiffusion framework. The framework proves to be effective when the poison ratio reaches up to 20%, which is significantly higher than what is typically observed in other tasks. This discrepancy raises doubts about the practicality of the framework, emphasizing the need for further investigation and evaluation across various tasks and poison ratios to ensure its broader applicability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the above weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. Here is our comment on the weakness. **[The Effective Poison Rate is Too High]** Thank you for your insightful comments. Firstly, in our threat model, the attackers would release the backdoored DMs to the public. As a result, once the utility is high enough to fool the users, the attack will succeed, no matter how high the poison rate is. The same threat model is also used in the standard backdoor attacks BadNets [R1] and BadDiffusion. Secondly, we also train the backdoor DDPM on CelebA-HQ with a lower poison rate and more training epochs. However, due to limited time, we could only train a 10%-poisoned model for 330 epochs. We attach the generated target and clean images in Figures 1(c) and 1(d) of the author rebuttal. The target image emerges clearly in the figure. Note that, compared to the full 2500 training epochs, this is a significant backdoor effect at a very early stage of training. Therefore, we can likely attack successfully with a 10% poison rate, which is much lower than 20%. Furthermore, we also take the inpainting task as a practical example in Appendix C6. Inpainting is a common application for diffusion models. We found that by inserting a trigger into the corrupted images, the diffusion models can generate target images easily. We also use LPIPS to evaluate the quality of recovered images and MSE to measure the backdoor success rate, and found that our method can achieve both high utility and specificity. [R1] Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg. BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. IEEE Access.
Summary: This paper proposes a universal backdoor attack framework on diffusion models facing different kinds of content schedulers, different kinds of samplers, and conditional and unconditional tasks. Strengths: 1. This paper proposes a universal backdoor attack framework on diffusion models, which is important. 2. The experiments are sufficient. 3. This paper is well written and technically sound. Weaknesses: 1. From my point of view, backdooring diffusion models is an end-to-end process; can you explain the main difference from some prior works on diffusion models such as [1]? If the only difference is to test on different diffusion models, the contribution is limited. 2. There is no comparison with the former methods, and thus I cannot find out whether there is improvement in the backdoor attack. 3. There are some writing flaws, such as "learns should" on line 191. [1] Chou, S. Y., Chen, P. Y., & Ho, T. Y. (2023). How to backdoor diffusion models? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4015-4024). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can you explain the main difference from some prior works on diffusion models such as [1]? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for giving us such valuable advice. We will elaborate on the following points. **[Main Difference from BadDiffusion]** Thank you for sharing your valuable thoughts. Firstly, from a theoretical perspective, our work is not just an extension of BadDiffusion but a general framework to deal with various configurations of diffusion models. With our theory in line 198, we can derive the correct backdoor correction terms of the diffusion models described within lines 110 ~ 116, which cover not only DDPM and score-based models but much more than these. To the best of our knowledge, no other backdoor attacks can be applied to this wide range of diffusion models. We also provide proof that the loss of BadDiffusion describes a backdoor SDE, which is a special case of a general backdoor process controlled by a hyperparameter $\zeta$. Our contribution is not only the simple and elegant adaptive correction term derived from VillanDiffusion with the specification of diffusion settings but also the general framework that is beneficial to different configurations of diffusion models, including SDE samplers, ODE samplers, flexible noise schedulers, and conditional generation. Due to its generality, VillanDiffusion also serves as a powerful tool for exploring backdoor attacks on future diffusion models. Secondly, from an empirical point of view, we also show the limitations of BadDiffusion in general responses 2 and 3. In general response 2, we show that BadDiffusion works poorly with ODE samplers and only works on DDPM, while VillanDiffusion performs well. In general response 3, we also show that the randomness of samplers is the critical factor that affects the performance of the backdoors because, when the randomness of samplers decreases, the MSE of VillanDiffusion trained for ODE goes down, but that of BadDiffusion goes up. This also offers empirical evidence for our theory. **[Compare to Other Baselines]** Thank you for your advice. 
BadDiffusion could not work with ODE samplers because it actually describes an SDE, which is proved theoretically in our paper and empirically in general responses 2 and 3. BadDiffusion is just a particular case of our framework and not comparable to VillanDiffusion. However, we still conduct an experiment to evaluate BadDiffusion on some ODE samplers and present the results in general response 2. We can see that BadDiffusion performs much more poorly than VillanDiffusion. Also, in general response 3, we point out that the leading cause of this phenomenon is the level of stochasticity and provide empirical evidence for our theory. As for TrojDiff and RickRolling the Artist, the authors modify the samplers and the text encoder to achieve backdoor attacks, respectively, so they have different threat models from ours. Neither method is comparable to our approach. This study is the first to explore backdoor attacks in many diffusion model configurations, such as ODE samplers and flexible noise schedulers. Our study is also among the first line of works exploring conditional generation. **[Line 191 Flaws]** Your valuable advice is much appreciated. We will revise the sentence as follows: When we compare it to the learned reversed process of SDE Eq. (11), we can see that the diffusion model $\epsilon_{\theta}$ should learn the backdoor score function to generate the backdoor target distribution $q(\mathbf{x}_{0}')$. --- Rebuttal Comment 1.1: Comment: Most of my concerns are addressed and I have raised my score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's response and the increased rating!
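As background for the stochasticity argument above: in DDIM-style samplers (under the standard conventions of Song et al., not the authors' implementation), a parameter $\eta$ interpolates between a deterministic, ODE-like update ($\eta = 0$) and a stochastic, DDPM-like update ($\eta = 1$); this is the knob behind "the level of stochasticity." A minimal sketch, with `eps_pred` standing in for the network's noise prediction and all values illustrative:

```python
import numpy as np

def ddim_step(xt, eps_pred, ab_t, ab_prev, eta, rng):
    """One DDIM reverse step; eta scales the injected noise (0 = deterministic)."""
    sigma = eta * np.sqrt((1 - ab_prev) / (1 - ab_t)) * np.sqrt(1 - ab_t / ab_prev)
    x0_pred = (xt - np.sqrt(1 - ab_t) * eps_pred) / np.sqrt(ab_t)  # predicted x_0
    noise = rng.normal(size=xt.shape) if eta > 0 else 0.0
    return (np.sqrt(ab_prev) * x0_pred
            + np.sqrt(1 - ab_prev - sigma**2) * eps_pred
            + sigma * noise)

rng = np.random.default_rng(0)
xt = rng.normal(size=8)
eps_pred = rng.normal(size=8)
# eta = 0: the update is deterministic, so two calls agree exactly
a = ddim_step(xt, eps_pred, 0.5, 0.8, eta=0.0, rng=rng)
b = ddim_step(xt, eps_pred, 0.5, 0.8, eta=0.0, rng=rng)
```

Sweeping $\eta$ from 1 toward 0 with such a sampler is one way to vary sampler randomness continuously, which is the kind of experiment the rebuttal's general response 3 describes.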
Summary: The paper presents VillanDiffusion, a framework for analyzing backdoor attacks on different types of diffusion models (DMs). VillanDiffusion covers various DM configurations such as unconditional and conditional DMs or training-free samplers and provides new insights into caption-based backdoor attacks. Strengths: + Originality: the paper presents a unified framework for analyzing backdoor attacks on DMs, covering various configurations and training-free samplers. The soundness of this paper is also noteworthy, with detailed proofs in the Appendix. + The experiments are comprehensive and demonstrate the effectiveness of the VillanDiffusion framework in performing backdoor attacks on DMs. + The paper is well-structured, with each section building on the previous one, making it easy to follow. Weaknesses: - The effectiveness of their backdoor attack on CelebA is limited: the increase in FID score is huge in this scenario, which does not show the advantage of their method against other baselines. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How to read Figures 2a and 2e? On line 278, the paper says that "From Fig. 2a and Fig. 2e, we can see the FID score of the backdoored DM on CelebA-HQ-Dialog is slightly better than the clean one". However, it seems that the data in Fig. 2a and Fig. 2e are the clean samples generated by the backdoored model. - Lack of further analysis of the robustness of various configurations of DMs against backdoor attacks: since VillanDiffusion can generalize to different mechanisms, samplers, and schedulers, it is worthwhile to analyze how the different modules affect the robustness of DMs. - How do DMs pretrained on CIFAR-10 become stronger in terms of utility after being fine-tuned on CIFAR-10? On line 298, the paper says that "We can see all samplers reach lower FID scores than the clean models under 70% poison rate for the image trigger Hat." 
Since the data used for pretraining and fine-tuning is the same, why does the generation ability of DMs still increase? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable suggestions. We will reply to your questions in the following. **[FID Score Increase]** Thank you for the constructive advice. To fully evaluate the threat of VillanDiffusion, we train the backdoored DDPM on CelebA-HQ with a 20% poison rate and more training epochs (the original number of training epochs is 1500). We found that with 2000 training epochs and the UniPC sampler, the FID score becomes 19.26, which is much lower than the reported 20.67. We believe that with sufficient training, the utility and specificity of the backdoor can get much better. We will update our improved results in the future. **[Elaborate Figure 2(a) and 2(e)]** We are thankful for your valuable feedback. In Figure 2, we use the empty quotation "" and green dots to mark the results of clean (backdoor-free) models. In Figure 1 of the attachment, we mark the results with red boxes. In Figure 1(a), we can see the FID scores of clean models trained on the CelebA-HQ-Dialog dataset are about 25, which are slightly higher than those of the backdoored models. **[Further Analysis of the Robustness of Various Configurations]** We appreciate your precious comments. According to general responses 2 and 3, we've conducted experiments on BadDiffusion and VillanDiffusion with different samplers. We found that BadDiffusion is only effective with SDE samplers. When the DDIM $\eta$ goes down, meaning the sampler behaves more like an ODE, the MSE of VillanDiffusion trained for ODE samplers decreases, but that of BadDiffusion increases. Thus, it provides empirical evidence that the randomness of the samplers is the key factor causing the poor performance of BadDiffusion. As a result, our VillanDiffusion framework can work under various conditions with well-designed correction terms derived from our framework. **[Poisoned DDPM Becomes Stronger]** We are thankful for the valuable input you've provided. We use the same pre-trained CIFAR10 DDPM model as BadDiffusion.
According to Figure 2(b) in the BadDiffusion paper, they also report a similar phenomenon: many poisoned models achieved better FID scores than clean models. It might be caused by the non-optimal training of the models.
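The DDIM stochasticity hyperparameter $\eta$ discussed in this thread can be illustrated with a minimal sketch of the standard DDIM update rule (Song et al.'s formulation), not the paper's backdoor loss; `ddim_step` is our own illustrative name, `eps` stands in for the output of a trained noise predictor, and the schedule values are made up:

```python
import math
import random

def ddim_step(x_t, eps, a_t, a_prev, eta, rng):
    """One scalar DDIM update (Song et al., 2021).

    eta = 0 gives the deterministic ODE sampler; eta = 1 matches
    ancestral (DDPM-style, SDE-like) sampling. a_t / a_prev are the
    cumulative products alpha-bar at the current and previous steps.
    """
    # Predicted clean sample x0 recovered from the noise estimate eps.
    x0 = (x_t - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)
    # Stochasticity scale: vanishes when eta is zero.
    sigma = eta * math.sqrt((1 - a_prev) / (1 - a_t)) * math.sqrt(1 - a_t / a_prev)
    z = rng.gauss(0.0, 1.0)
    return (math.sqrt(a_prev) * x0
            + math.sqrt(1 - a_prev - sigma ** 2) * eps
            + sigma * z)

# Illustrative values: at eta = 0 the noise term drops out entirely,
# so the update is fully deterministic (the ODE regime).
x_t, eps_hat, a_t, a_prev = 0.3, 1.0, 0.1, 0.5
print(ddim_step(x_t, eps_hat, a_t, a_prev, 0.0, random.Random(0)))
```

This makes concrete why sliding $\eta$ from 1 to 0 interpolates between the SDE regime (where BadDiffusion's correction term is derived) and the ODE regime (which VillanDiffusion's correction term targets).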
Rebuttal 1: Rebuttal: ## General Response Thanks for the insightful comments. We appreciate your careful reviews. Here, we will give a general response to common suggestions. **[Unlike standard backdoor attacks, backdoor diffusion models require modifying the diffusion process]** Based on the review comments regarding the difference of VillanDiffusion to existing backdoor attacks, there are two major points that we would like to emphasize: (1) regular data poisoning attacks for backdoor injection (e.g., only changing the training data and labels) are not effective on diffusion models. Diffusion models aim to learn cascading denoising processes, and the models learn to remove specific levels of noise. We can see this in the loss function of DDPM in equation 7: it adds a specific amount of Gaussian noise $\hat{\beta}(t) \epsilon$ to images and removes $\epsilon$ noise from noisy images. With hundreds of steps, a meaningful pattern finally emerges. Thus, simply poisoning the dataset with a fixed backdoor trigger and target without modifying the diffusion process would not inject trojans into diffusion models successfully. As a result, as the VillanDiffusion loss in line 200 shows, backdoored diffusion models need to learn how to remove specific levels of triggers $\frac{2 H(t)}{(1+\zeta) G^2(t)} r$. In addition, the removed levels of triggers vary over time $t$ based on the content scheduler $\hat{\alpha}(t)$ and the noise scheduler $\hat{\beta}(t)$. In contrast, regular data poisoning would make the diffusion models learn to remove the triggers at once and cause incorrect patterns to accumulate. That is also why we need an additional correction term for backdooring diffusion models. (2) VillanDiffusion is a universal framework for any diffusion model following the diffusion process, and BadDiffusion is just a particular case under our framework.
In addition, BadDiffusion only works on DDPM and ancestral sampling (the original sampler of DDPM) with well-designed and specific correction terms. Moreover, our framework incorporates the continuous view of diffusion models, like SDEs, which has not been explored. It also provides a tool to analyze backdoor attacks for different configurations. Researchers can also investigate the risks of backdoor attacks on their own diffusion models by designing different correction schedulers following our framework. **[BadDiffusion fails to generalize to different configurations, while VillanDiffusion does not]** To further demonstrate the generality of VillanDiffusion and the limitation of BadDiffusion, we conduct experiments to show that backdooring DDPM on CIFAR10 with various ODE samplers (including UniPC, DPM-Solver, DEIS, etc.) will not be successful with the BadDiffusion loss, while using the correct loss derived from VillanDiffusion will be successful. Please refer to Table 2 in the author's rebuttal. In Table 2, we can find that the MSE of BadDiffusion remains high and its SSIM remains low. That means BadDiffusion performs poorly with ODE samplers. **[Further Analysis of the Robustness of BadDiffusion and VillanDiffusion]** Here, we conduct an additional experiment. We evaluate the robustness of BadDiffusion and VillanDiffusion with different randomness ($\eta$) hyperparameters of DDIM samplers. We attach the numerical results of the experiment in Table 1 of the author's rebuttal. Also, we control the randomness of the DDIM sampler with the hyperparameter $\eta$. When $\eta$ is 0, the DDIM sampler has no randomness and reduces to an ODE sampler. In contrast, when $\eta$ is 1, the DDIM sampler becomes an SDE sampler. As a result, we can evaluate the effects of the randomness of samplers on the correction terms derived from the ODE and SDE.
As the figures show, when $\eta$ goes down and DDIM gets closer to an ODE sampler, the correction terms derived from the ODE (VillanDiffusion) become more effective, while the ones derived from the SDE (BadDiffusion) worsen. Thus, we can see that the randomness of the samplers is the key factor causing the failure of BadDiffusion, as our theory predicts. The results also show the necessity and novelty of our framework for implementing successful backdoor attacks. Pdf: /pdf/4479debb0c0054b828c9c2aaed9371048cb63326.pdf
NeurIPS_2023_submissions_huggingface
2023
A Refutation of Shapley Values for Explainability
Reject
Summary: In this paper, the authors formally define five anomalies for an explainability score and prove that for every n >= 4, there exist Boolean classifiers defined over n features that exhibit one or more of these anomalies for the SHAP score. In this way, the authors provide evidence of the inadequacy of Shapley values for explainability. The aforementioned anomalies are defined by considering the concept of abductive explanation. More precisely, given a binary classification model M : {0,1}^n -> {0,1} and a tuple v in {0,1}^n, a subset X of {1, ..., n} is said to be a weak abductive explanation of (M,v) if for every y in {0,1}^n such that y[i] = v[i] for every i in X, it holds that M(y) = M(v). In other words, the values of v for the features in X are enough to obtain the same result as M(v), so they are enough to explain the output of M for v. Moreover, a subset X of {1, ..., n} is said to be an abductive explanation of (M,v) if X is a weak abductive explanation of (M,v), and there is no weak abductive explanation X' of (M,v) such that X' is a proper subset of X. In other words, X is an abductive explanation for (M,v) if X is a minimal weak abductive explanation for (M,v). Then a feature i is said to be relevant for (M,v) if there exists an abductive explanation X of (M,v) such that i belongs to X, and otherwise i is said to be irrelevant for (M,v). With this notion of irrelevance, the anomaly I5 for the SHAP score is defined as the existence of a feature i such that i is irrelevant for (M,v), but the absolute value of the SHAP score of i is greater than the absolute value of the SHAP score of every other feature. Thus, this can be considered as an anomaly of the SHAP score, as i is an irrelevant feature that is considered more relevant according to the SHAP score than all the other features (some of which are relevant). The other four anomalies considered in the paper (I1, I2, I3, I4) are defined in a similar fashion. Strengths: 1.
The five notions of anomaly studied in the paper clearly represent anomalies for explainability scores. These notions are properly formalized in the paper. 2. The paper provides valuable insights into the SHAP score, specifically providing a formal framework to assess its adequacy as an explainability score. 3. The paper provides one of the first formal results on the inadequacy of Shapley values for explainability. 4. The paper is well written. Weaknesses: 1. The results of the paper show that a tiny proportion of the Boolean classifiers defined over n features exhibit some of the anomalies I1, I2, I3, I4 or I5. For example, the paper proves that at least 2^{2^{n-1} - n - 3} Boolean classifiers exhibit anomaly I1, which is a tiny proportion of the 2^{2^n} possible Boolean classifiers defined over n features. Hence, it could be the case that the vast majority of Boolean classifiers do not exhibit the anomalies studied in the paper. 2. In practice Boolean classifiers are given in some specific formalism, such as decision trees or binary decision diagrams. The authors do not provide any results about the formalisms that are suitable to express the Boolean functions exhibiting anomalies. For example, is it possible to express the Boolean functions in the proofs of Propositions 3, 4, 5 and 6 as decision trees of polynomial size in the number n of features? If this is not possible, can these functions be expressed as FBDDs (or d-DNNFs) of polynomial size in the number n of features? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Could you please comment on the points 1. and 2. mentioned in Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The following are the main limitations of this work (see Weaknesses), which are not addressed in the paper. - The results of the paper show that a tiny proportion of the Boolean classifiers defined over n features exhibit some of the anomalies I1, I2, I3, I4 or I5. - The authors do not provide any results about the practical formalisms (such as decision trees) that are suitable to express the Boolean functions exhibiting anomalies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
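The abductive-explanation machinery summarized in this review (weak AXp, minimal AXp, relevant/irrelevant features) can be checked by brute force for small n. The sketch below is illustrative only; the function names are our own, not the paper's:

```python
from itertools import combinations, product

def is_weak_axp(M, v, X):
    """X is a weak abductive explanation of (M, v) iff fixing the
    features in X to their values in v forces the prediction M(v)."""
    n = len(v)
    free = [i for i in range(n) if i not in X]
    for bits in product([0, 1], repeat=len(free)):
        y = list(v)
        for i, b in zip(free, bits):
            y[i] = b
        if M(tuple(y)) != M(v):
            return False
    return True

def axps(M, v):
    """All (subset-minimal) abductive explanations of (M, v)."""
    n = len(v)
    weak = [set(X) for r in range(n + 1)
            for X in combinations(range(n), r) if is_weak_axp(M, v, X)]
    return [X for X in weak if not any(Y < X for Y in weak)]

def relevant_features(M, v):
    """A feature is relevant iff it occurs in at least one AXp."""
    out = set()
    for X in axps(M, v):
        out |= X
    return out

# Example: M(x1, x2) = x1, at v = (1, 1). The only AXp is {x1},
# so feature 0 is relevant and feature 1 is irrelevant.
M = lambda x: x[0]
print(axps(M, (1, 1)))               # → [{0}]
print(relevant_features(M, (1, 1)))  # → {0}
```

An anomaly such as I5 would be exhibited by a classifier/input pair where a feature outside `relevant_features(M, v)` nevertheless receives the largest absolute SHAP score.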
Rebuttal 1: Rebuttal: There is a misunderstanding in the review. The bounds proved in the paper are *lower* bounds, and it is stated in the paper that these are fairly loose lower bounds. The goal of these bounds is solely to establish that the number of boolean classifiers for which Shapley values exhibit some sort of issue is non-negligible. Q1. We dispute the claim made by the reviewer that the number of boolean classifiers is "tiny". First, this is not the case because the paper only aims at proposing lower bounds on the numbers of such classifiers. Also, and as stated in the paper, the lower bounds do not aim to be tight. Quoting from our paper: "we can prove the following (fairly loose) lower bounds on the number of functions exhibiting the different issues". Second, the number is not "tiny" because, as demonstrated by the experimental results reported in reference [35], for some of the issues with Shapley values, almost *all* boolean classifiers exhibit those issues. Furthermore, even if the number of boolean classifiers were indeed "tiny", the results in our paper prove that one of the most widely used explainability methods can produce misleading information regarding relative feature importance, for arbitrarily many boolean classifiers. Even if the number of such classifiers were indeed negligible (and it is not), the fact that the theoretical foundation of several explainability methods is flawed is reason for serious concern. Q2: This is an interesting question, but one that is orthogonal to the goals of the paper. The paper proves that there are arbitrarily many boolean classifiers for which Shapley values will give misleading information regarding relative feature importance.
Also, given the experimental results in [35], it is guaranteed that there exist many boolean classifiers, many of which are easy to represent either with decision trees or with tractable circuits, and for which the size of the representation is polynomial in the number of features. We can add this comment to the paper, but it does not affect the paper's claims in any way. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your answer. I understand that the bounds provided in the paper represent lower bounds and are not tight. My point is that the bounds proved in the paper show that a tiny proportion of Boolean classifiers, defined over n features, exhibit some anomalies. More formally, if f(n) is this fraction as a function of n, then lim_{n -> infinity} f(n) = 0. Obviously, this does not preclude the existence of stronger lower bounds that demonstrate that a non-negligible proportion of Boolean classifiers exhibit some anomalies. While I believe this paper offers valuable insights, I still think it is not ready for publication. It does not show either that a significant proportion of Boolean classifiers exhibit the anomalies discussed in the paper or that popular formalisms for Boolean classifiers display such anomalies. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the comments. However, we disagree with some of the comments. First, and as stated in our rebuttal, it is already known that, for boolean classifiers with 4 features, some of the issues studied in our paper occur in almost *all* classifiers. So, it is already known that the issues studied in our paper occur in a *large* fraction (in fact in *most*) of boolean classifiers. To be clear, issue I1 is identified in 99.67% of all boolean classifiers with 4 features. Issue I2 is identified in 61.72% of all boolean classifiers with 4 features. These numbers clearly indicate that the issues reported in the paper are most often observed.
We can include an extended table with these results, which prove that the issues studied in our paper occur in most of the boolean classifiers one can think about. Second, and also as stated in our rebuttal, for some of the classifiers that exhibit one or more of the reported issues, their representation is polynomial in the number of features. This is really not an issue. Third, and as stated in a comment to another reviewer, let us agree that to disprove a theory a single counterexample suffices. Earlier work revealed the existence of issues for a large fraction of boolean classifiers with four features. This result might be challenged because of the *fixed* number of features. Our paper proves that the issues reported in earlier work, and also a number of additional issues, occur in arbitrarily many boolean classifiers. This disproves the existing theory on Shapley values for XAI, independently of how frequently those issues might occur (and existing results prove that they occur in almost all boolean classifiers with four features). We will be happy to provide additional clarifications, but we feel that the two criticisms raised by the reviewer have been sufficiently deconstructed.
Summary: This paper reviews previous work on ideas of feature importance and highlights inconsistencies with Shapley values. It defines ideas of importance and irrelevance of features in a Boolean ML model. These definitions are based on the idea of a minimal set of inputs needed to freeze a model output. Necessary inputs are in every minimal coalition that can freeze the output, relevant inputs are in at least one minimal coalition, and irrelevant inputs are in no coalitions. They then go on to show that, among other issues, there exist Boolean models and certain inputs where irrelevant inputs are given large Shapley values, while relevant inputs are given a Shapley value of zero. Thus, the logic goes, Shapley values do not track importance. The paper's original contributions are to prove that model/input pairs with issues exist/can be found for models of any input size. Previously, only small models were exhibited to have these issues, but it was unknown if larger models also had these issues. They also give lower bounds on the number of models that have these issues. Strengths: - Generally clear and straightforward exposition. - Good background and presentation of previous results. - Results are easy to understand. - The idea of necessary, relevant, and irrelevant is intuitive. Weaknesses: - Paper is based on a comparison of apples to oranges, without an in-depth analysis of the issue. It is possible that the whole paper is based on a misunderstanding. Further analysis is needed. - Some grammatical issues. - Contributions are not very significant. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I believe, based on my calculations, that the paper's definition of Shapley values in equation (3) is incorrect. For example, take the model $F(x_1, x_2) = x_1$. I get that the Shapley values of the input $(1,1)$, assuming the baseline is $(0,0)$, are 1/2 for $x_1$ and 0 for $x_2$ under your definition.
The Shapley value is actually 1 for $x_1$ and 0 for $x_2$, based on widely known definitions. The Shapley value satisfies the axiom of completeness, or efficiency, so the definition you gave is definitely not correct if it indeed gives the values 1/2 and 0. - The heart of the paper is a critique of Shapley values. It is asserted that there is a metric of feature importance and irrelevance and that the Shapley value assigns importance to what this other metric indicates is not important, and assigns no importance to what this other metric identifies as important. Admittedly this critique is from another paper, and this paper builds on this critique. Fundamentally, the Shapley value indicates a feature's contribution to function value change in comparison to the function evaluated at the comparative baseline input. The metrics of AXp and CXp are metrics of the ability of inputs to determine or alter a model at an input. One is about function change from a baseline, the other is about fixing or altering an output. There does not seem to be any analysis as to whether these notions are tracking the same underlying, and undefined, concept of "importance." This paper defines "important" one way, while Shapley values define it another way. Is the problem that Shapley values do not indicate importance, or is the problem that Shapley values indicate importance according to one definition, while people are confused and think it indicates importance by another definition? The second case would account for this paper's results, while problematizing the conclusion that Shapley values do not indicate importance at all. This question needs further investigation and exposition in the paper, I believe. As an illustration, let $F$ be a model with one input defined as $F(x) = x$. Also, let the input of consideration be $x=0$. By the definition of the paper, $x$ is very important because $x$ is a "necessary" input. However, the Shapley value of $x$ is $0$ at the input $0$.
An inadequate analysis of this result is that the Shapley value indicated zero importance for a "necessary" input, so the Shapley value does not track importance. An equally inadequate analysis, opposite of the first, is that a "necessary" input did not cause any function change, so the idea of a "necessary" input is flawed. However, a more sophisticated analysis is that necessary inputs can have no contribution to a model changing from the baseline. Shapley values measure one thing, and "necessary" inputs measure another. In summary, it appears possible that this paper deals with two different metrics of "important," that these metrics disagree, but also, the two metrics measure different things and work in different ways. Shapley values do not indicate importance according to this paper's definition, but why is that necessarily an issue with Shapley values? Minor issues: - (39) "contributions of features to explainability". Do you mean function output? - (97) $2^\mathcal{F}$ should be $2^{|\mathcal{F}|}$ ? Sometimes you use one, sometimes the other. - (108) "did" contribute", not "can" contribute - (117-118) "which corresponds to a PI-explanations" odd grammar. - Please define PI-explanations (prime implicants). It seems that unknowns are defined in terms of unknowns. - Unsure if WAXp is a function? - I1-I5 are stated in what appear to be sentence fragments. - I5 -> I2, but also, I5 -> I1, and I4 -> I3, I1, and I2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The author has not discussed the limitations of the claim that Shapley values are refuted. This statement seems not entirely supported. See questions.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the in-depth review. We feel there is a misunderstanding in what is being proved. Our work builds on the definition of Shapley values for XAI studied in recent work, namely references [7,8,21,22], but also the more recently published paper: [78] M. Arenas, P. Barceló, L. E. Bertossi, M. Monet: On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results. J. Mach. Learn. Res. 24: 63:1-63:58 (2023) These papers are based on the NeurIPS'17 paper by Lundberg&Lee (reference [47] in our paper), which builds on earlier work on the same topic. The definition of Shapley values for XAI used in our paper is taken verbatim from those papers, specifically [7,8,78] and indirectly [21,22]. Furthermore, we underscore that the existing bibliography concurs with our interpretation of Shapley values (for XAI) as a measure of feature importance, and with the meaning of 'importance'. Concretely, references [8,78] read: "Thus, SHAP(M,e,x) is a weighted average of the contribution of feature x on e to the classification result, ...". Furthermore, references [21,22] read: "Finally, the SHAP explanation computes a score for each feature $X\in\mathbf{X}$ averaged over all possible contexts, and thus measures the influence feature X has on the outcome." More importantly, our paper also includes quotes from references [64] and [65] further supporting this interpretation of Shapley values (for XAI). To be clear: the interpretation of Shapley values as a measure of feature importance in those papers and in our paper is exactly the same. Thus, the comparison is not 'apples to oranges'; quite the contrary. We ask the review to be changed accordingly. We also underscore that relevancy has been studied in logic-based abduction since the 90s, e.g. reference [23] in our paper.
As stated in [23] (and adapting to XAI), relevant features occur in some acceptable explanation, whereas irrelevant features do not occur in any. AXp's are concerned with minimal conditions for prediction sufficiency; irrelevant features do not occur in any AXp. CXp's are concerned with minimal conditions for prediction change; irrelevant features do not occur in any CXp. The key point here is that irrelevant features play no role whatsoever, neither in prediction sufficiency, nor in prediction change. Thus, assigning no importance to relevant features is misleading; and assigning importance to irrelevant features is also misleading. The reviewer makes the valid point that there exist other interpretations of Shapley values for XAI where the set function is defined differently, and where baselines are considered. Besides the NeurIPS'17 paper, where the concept of base value is described, there are other works that formalized the use of baselines, namely: D. Janzing, L. Minorics, P. Blöbaum: Feature relevance quantification in explainable AI: A causal problem. AISTATS 2020: 2907-2916 M. Sundararajan, A. Najmi: The Many Shapley Values for Model Explanation. ICML 2020: 9269-9278 However, considering different baselines is not the focus of our work, as it would be close to impossible to refute all the different heuristics that have been proposed when approximating Shapley values for XAI. Our paper focuses on a simple, yet rigorous, definition of Shapley values for explainability, as clarified above. Furthermore, our paper reveals the limitations of such a definition. We claim that this theoretical framework suffices, because the goal is to prove a counterexample to validity, and no sound theory withstands a single counterexample to validity.
Nevertheless, in the updated version of the paper, we will include a statement to that effect: "The paper proves that a widely used definition of Shapley values for XAI can produce misleading information regarding relative feature importance. Nevertheless, it is left open whether other definitions of Shapley values, e.g. those based on considering different baselines [Refs], might circumvent the issues with Shapley values reported in this paper." Given the above, there is nothing incorrect in our paper. Our definition of Shapley values (for XAI) follows verbatim the definitions in earlier work (see [7,8,21,22]), including equation (3). The values computed with equation (3) are correct, given the definitions in our paper and in the earlier papers [7,8,21,22]. Finally, we disagree with the reviewer regarding the comment: 'However, a more sophisticated analysis is that necessary inputs can have no contribution to a model changing from the baseline. Shapley values measure one thing, and "necessary" inputs measure another.' As with the rest of this review, the comment considers a concrete definition of the set function for Shapley values (for XAI) which is not the one our work is based on, and which is not the one used in the references cited in our paper. For the example given, the only way for the model to change the predicted value is to change the value of x. The Shapley value that we obtain in this case is -0.5, and the feature is relevant because it occurs in some AXp/CXp. So, all this makes complete sense. As stated earlier, considering different baselines is not the focus of our paper, given the works our paper builds upon. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: We thank the author for their clarification of the Shapley value. We agree that the formulation of Shapley values is in accord with previous literature, i.e. Lundberg and Lee (2017), and withdraw our comments about the incorrectness of equation 3.
Regarding the refutation of the "apples-to-oranges comparison" claim, we are not convinced. We agree that the paper, if correct, shows that for arbitrarily large domains, Shapley values may not indicate necessary or irrelevant features. We disagree that this is a refutation of Shapley values for ML explainability. 1) It may be that Shapley values both ARE useful and legitimate for ML explainability AND do not track necessary or irrelevant features. This is because the purpose of Shapley values is to indicate feature contribution to function change relative to an input baseline. This purpose may be separate from necessary and irrelevant features. 2) Regarding the provided quotes: the word "importance" is not in any quote, only feature "contribution" and "influence." These quotes claim that Shapley values track feature contribution and influence, which are different from and more specific than feature importance. It is our opinion that the idea of "importance" is vague, and that when we take a high-resolution view of the matter, the issues seem to dissolve. 3) We concede that some popular opinions may hold that Shapley values indicate feature importance, which is false for a certain definition of importance. We disagree that the proper remedy is to assert that Shapley values are not at all useful for ML explainability. We rather advocate for a clarification of what Shapley values are, and what they are not. We recommend that the paper move away from the "refutation" claim, and instead state the claim as: Shapley values are "incompatible" with a certain notion of feature importance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the thoughtful comment, and for acknowledging that there is nothing wrong with our paper. However, we disagree with some of the comments made. 1. The reviewer states: "This is because the purpose of Shapley values is to indicate feature contribution to function change relative to an input baseline".
As clarified in our rebuttal, we consider an existing definition of Shapley values for XAI which does *not* consider an input baseline. Furthermore, the papers upon which our work is based also do not consider an input baseline. Also, as stated in our rebuttal, our interpretation of the meaning of Shapley values for XAI is *exactly* the one used in those papers. 2. As already stated in our rebuttal, we will explicitly acknowledge that our refutation of Shapley values for XAI applies to an existing definition and interpretation of feature importance. However, for that definition and interpretation, what our paper establishes is indeed a refutation of Shapley values for XAI. Future work, ours or by others, will analyze the alternative definition of Shapley values that considers a baseline, given the now known result that, for some definitions of Shapley values for XAI, the obtained measures of feature importance are provably misleading. We will be happy to provide any additional clarifications, but we feel that the main criticisms raised by the reviewer have been addressed.
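The definitional point debated in this thread can be reproduced in a few lines. Under the baseline-free set function $\phi(S) = E[M(y) \mid y_S = v_S]$ with a uniform input distribution (the definition from [7,8,21,22] that the authors say their equation (3) follows), the reviewer's example $F(x_1,x_2)=x_1$ at $v=(1,1)$ yields scores $1/2$ and $0$, and the one-feature example $F(x)=x$ at $v=0$ yields $-1/2$, as stated in the rebuttal. The sketch below uses our own helper names and is only an illustration of this definition:

```python
from itertools import combinations, product
from math import factorial

def shap_score(M, v, i):
    """SHAP score of feature i for (M, v), using the set function
    phi(S) = E[M(y) | y_S = v_S] under the uniform distribution on
    {0,1}^n (the baseline-free definition discussed above)."""
    n = len(v)

    def phi(S):
        # Average M over all completions that agree with v on S.
        free = [j for j in range(n) if j not in S]
        vals = []
        for bits in product([0, 1], repeat=len(free)):
            y = list(v)
            for j, b in zip(free, bits):
                y[j] = b
            vals.append(M(tuple(y)))
        return sum(vals) / len(vals)

    others = [j for j in range(n) if j != i]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (phi(set(S) | {i}) - phi(set(S)))
    return total

# Reviewer's example: F(x1, x2) = x1 at v = (1, 1).
F = lambda x: x[0]
print(shap_score(F, (1, 1), 0))  # → 0.5 under this definition
print(shap_score(F, (1, 1), 1))  # → 0.0
```

A baseline-style definition (e.g. Lundberg & Lee with baseline $(0,0)$) would instead give $1$ and $0$ here, which is exactly the discrepancy the reviewer and the authors traced back to the choice of set function.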
Summary: The paper demonstrates / constructs functions with features whose Shapley values (i.e., attributive importance in a prediction) is misaligned with their true relevance. Strengths: - Addresses a theoretical gap in our understanding of Shapley values. Weaknesses: - I find the problem being investigated to be mostly a mathematical curiosity that so happened to be open and has now been addressed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: n/a Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: This "review" is unacceptable in any credible conference. We fail to see how this "review" can even be accepted as a review. There are no concrete comments on the submitted work. The only stated strength reads: "Addresses a theoretical gap in our understanding of Shapley values." This is not true. Our paper does not address a theoretical gap in the understanding of Shapley values. It proves that Shapley values can produce misleading information regarding relative feature importance, for arbitrarily many classifiers. Our paper does not address any 'gap'. Furthermore, the only listed weakness is a baseless statement about some 'mathematical curiosity', which has nothing to do with the focus of our paper. A paper proving that what thousands of earlier papers have used as the theoretical justification for explainability can produce misleading information is not a 'mathematical curiosity'. The lack of quality of this "review" has been reported to the Area Chairs, Senior Area Chairs and PC Chairs. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I acknowledge having read the response of the authors. I will try to elaborate more on my two points: 1. A "theoretical gap" is a missing piece in our understanding / knowledge of something. Prior to this work, we had a gap in our understanding / knowledge on whether "Shapley values can produce misleading information regarding relative feature importance, for arbitrarily many classifiers". We didn't know whether this statement was true or not. Now, because of this paper, we know. The paper has closed this gap in our understanding / knowledge. I am not sure why the authors take issue with the claim that their work addresses a theoretical gap. How can this claim be possibly construed in a negative way? 2. The rebuttal claims that the paper proves "what thousands of earlier papers have used as the theoretical justification for explainability can produce misleading information". 
We already knew that. The paper already acknowledges that much in the first sentence of its abstract: "Recent work demonstrated the existence of Boolean functions for which Shapley values provide misleading information about the relative importance of features in rule-based explanations.". So, we knew that Shapley values "can produce misleading information". Much more modestly than what the rebuttal claims, this paper shows that Shapley values can not only produce misleading information as known, but can do so for arbitrarily many functions. I found this particular result to be of little consequence. The functions are artificially constructed, and the result, therefore, has no apparent implication on how widespread the problem with Shapley values (that we already knew was there) really is in practice. In my opinion, which I was asked to offer as a reviewer, the result of the paper is a mathematical curiosity: can we extend the existence result of previous work, to an "arbitrarily-many cases" result? The answer seems to be positive, without any obvious real-life repercussions. I stand to be corrected if the arbitrarily many functions considered in this paper somehow relate to functions used in the relevant literature. P.S. I thank the authors for being forthcoming in terms of reporting this review. --- Reply to Comment 1.1.1: Comment: Fact: the reviewer wrote two sentences in his/her "review". Given the reviewer's comment, we conclude that there was a lot the authors had to infer from those two lines. That is not how reviews are written, again not in any credible conference. Also, it should be said that some of the reviewer's comments are similar to the criticisms made by the other reviewers. We take this as a coincidence; but those comments should have been included when the review was written, not as an afterthought. What the reviewer claims that we (perhaps everybody working in XAI?) seem(s) to "know" comes from an arxiv preprint, never published. 
We emphasize: that preprint has not been published. Given the thousands of papers already published on Shapley values, and the hundreds that continue to be published almost every week, and given the ongoing proposed high-stakes uses of Shapley values, it seems evident that an arxiv preprint will not suffice to make sure that everybody understands what everybody seems to "know". The comments made by the reviewer, which were sadly not included in the review, merit a reply. 1. To close a "theoretical gap", one has to start from a sound theory. What reference [35] suggests, and this paper effectively demonstrates, is that Shapley values for XAI are unsound. So, there is no theory to start with, and so there is no gap to close. 2. As clarified in our paper, the fact that issues with Shapley values for XAI occur for boolean classifiers with four features could represent some sort of special case. For example, one might try to detect those special cases and then claim to have a sound theory. This paper proves otherwise, in that the number of special cases is unbounded. Evidently, that is why the result matters. All this is stated in our paper. 3. To disprove a theory, a single counterexample suffices. In the case of Shapley values for XAI, one might circumvent a few special cases. Therefore, the goal of our paper is to prove that this cannot be done, and so the theory of Shapley values for XAI is indeed unsound. The experimental results from [35], which the reviewer claims to be "known", show that the number of boolean classifiers exhibiting issues is actually massive, for classifiers with four features. So, asking "if the arbitrarily many functions considered in this paper somehow relate to functions used in the relevant literature" is really a moot point. Because of our paper, the use of Shapley values for XAI has now been disproved as a general theory supporting approaches for relative feature importance. 
However, if Shapley values for XAI were somehow to miraculously work for "functions used in the relevant literature", that should now be proved, given the now proved fact that Shapley values for XAI are generally unsound.
Summary: Based on definitions of feature necessity, relevancy, and irrelevancy from previous work, as well as systematic issues with Shapley values for explainability on boolean classifiers (e.g. non-zero Shapley values assigned to irrelevant features, zero Shapley values assigned to relevant features, among others) identified in previous work, the authors offer proof of their existence in functions with an arbitrary number of variables. They conclude that the existence of such systematic issues is cause for concern in using Shapley values for explainability, as misleading information about feature importance can induce errors in human decision making. Strengths: - Originality: The work offers proof for the existence of issues with Shapley value explanations on boolean functions with an arbitrary number of variables that were previously only studied empirically. - Quality and Clarity: The theoretical framework, preliminaries, and proofs are described in a very concise manner. Despite the theoretical nature of the paper, the authors are able to concisely state to the reader what is described in each formula (e.g. lines 125-127: Thus, given an instance (v, c), a (weak) AXp is a subset of features which, if fixed to the values dictated by v, then the prediction is guaranteed to be c, independently of the values assigned to the other features). Similarly, the main idea for each proof is described in a very intuitive manner, increasing readability of the paper significantly. - Significance: The present work proves systematic issues exhibited by Shapley value explanations on boolean functions. Shapley values are one of the most popular solutions, as they are based on clearly defined axioms, i.e. properties deemed desirable for explanations. For boolean functions, the present work shows that these axioms (which Shapley values do fulfill) may be lacking for treating irrelevant and relevant features as would be expected. 
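For readers who want to reproduce the setting discussed in this review, here is a minimal brute-force sketch of the Shapley formulation under discussion (conditional expectations over a uniform input distribution, with no input baseline). All function and variable names are ours, not the paper's; this is an illustration of the standard definition, not the authors' code.

```python
from itertools import combinations, product
from fractions import Fraction
from math import factorial

def cond_exp(f, n, fixed):
    """E[f(x)] under a uniform distribution on {0,1}^n, with the
    features listed in `fixed` held at the given values."""
    vals = [Fraction(f(x)) for x in product((0, 1), repeat=n)
            if all(x[i] == b for i, b in fixed.items())]
    return sum(vals) / len(vals)

def shapley(f, n, v):
    """Conditional-expectation Shapley value of every feature at instance v."""
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = Fraction(0)
        for k in range(n):  # size of the coalition S not containing i
            w = Fraction(factorial(k) * factorial(n - k - 1), factorial(n))
            for S in combinations(others, k):
                fixed = {j: v[j] for j in S}
                gain = cond_exp(f, n, {**fixed, i: v[i]}) - cond_exp(f, n, fixed)
                phi += w * gain
        phis.append(phi)
    return phis

# f(x) = x1 AND x2 at instance (1, 1): by symmetry each feature gets 3/8.
print(shapley(lambda x: x[0] & x[1], 2, (1, 1)))
```

Efficiency holds by construction: the values sum to f(v) - E[f]. The paper's point is that, for arbitrarily many Boolean functions, the individual values nevertheless misrank relevant and irrelevant features under the AXp-based definitions recalled above.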
Weaknesses: I am a bit concerned with the novelty, as the present work only provides proof for observations about unexpected behavior of Shapley value explanations for boolean functions that were already observed empirically in previous work (however, the authors also state themselves that these issues have been identified empirically in previous work). To raise concern about e.g. I1, it would be sufficient to simply identify a case where irrelevant features are assigned nonzero Shapley values. I also believe the title promises a bit more than is provided by the paper. The proofs and resulting claims are restricted to boolean functions; however, it would be interesting to see how and if the described issues occur in continuous settings, e.g., when explaining DNNs. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would be interested in how the findings translate to non-boolean functions, i.e., the continuous setting. E.g. the definitions of relevant, irrelevant, and necessary features based on classification change make sense in the boolean setting, but not in a continuous setting where logit values easily change with features being added. Also, it would be interesting to see some concrete recommendations as to how the identified issues might be avoided. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: restriction to boolean functions, as described in "Weaknesses" section. I think a paragraph of how the described proofs and observations may impact Shapley value explanations in more real-world settings would go a long way here, as well as suggestions on how to mitigate the proven issues. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We underscore that our paper extends significantly the experimental results from [35]. The proofs that there exist arbitrarily many boolean classifiers for which Shapley values give misleading relative feature importance offer a strong argument for why these results should be presented to a wider audience. Also, we expand the issues reported in [35], and prove that these also exist for arbitrarily many boolean classifiers. Moreover, and to the best of our knowledge, the earlier experimental work (i.e. reference [35]) has not been presented at any conference. Given the importance of a result that refutes the validity of Shapley values in explainability, we believe that such a result should be made widely visible by presentation in a top-tier conference, especially when the refutation is being established for arbitrarily many (boolean) classifiers. Answer to questions: Q1: First, the empirical classifier search described in earlier work (i.e. reference [35]) can hardly serve as the basis to answer this question. Second, the techniques proposed in our paper can be adapted to prove results for non-boolean cases. This requires understanding the proof techniques that we propose, and then considering generalizations to non-boolean functions with categorical features. This is the subject of future work. Q2: By understanding the limitations of existing definitions of Shapley values, it is now possible to modify those definitions in order to address those limitations. For example, there is recent work proposing an alternative to Shapley values that relates to the research detailed in our paper. This is described in the following preprint: J. Yu, A. Ignatiev, P. J. Stuckey: On Formal Feature Attribution and Its Approximation. CoRR abs/2307.03380 (2023) An open direction of research, not addressed in the preprint above, is how to adapt the definition of Shapley values such that (ir)relevancy of features is accounted for.
NeurIPS_2023_submissions_huggingface
2023
Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models
Accept (poster)
Summary: In this paper, the authors developed a joint model of CLIP and LLMs to solve the task of visual relation detection. In this model, images are encoded into a triplet, i.e., object, subject, and spatial branches. Then it leverages large language models (LLMs) to generate description-based prompts (or visual cues) for each component. Experiments on four VRD benchmarks show good results compared to the baseline models. Strengths: + Interesting and good application of LLMs, including GPT-3.5 and large multi-modal pretraining models, e.g., CLIP. + Shows higher performance than baseline methods. + Good storytelling that helps readers understand the key idea. Weaknesses: ### Technical Novelty and main ideas 1. This paper pays its major attention to designing and using an LLM, i.e., GPT-3.5, to facilitate the deduction of multi-modal models. Many contributions lie in designing and feeding prompts into LLMs. This contribution seems insignificant to me and does not seem generalizable to future LLMs with architectures different from GPT-3.5. Besides, the authors also lack a deep investigation into the improvements from Chain of Thought (CoT). Although the overall application of LLMs seems interesting, the solid contributions of this paper are not clear to me. ### Presentation and motivation issues The descriptions of CoT and experimental results are somewhat unclear. Besides, the novelty of using CoT in this task is not sufficient. Designing prompts 2. The reviewer tested the same prompt using GPT-3.5, and in this case, with or without Chain of Thought (CoT) there are no significant differences (with CoT: avg: 0.51, 0.3, 0.19 for s,o,p; w/o CoT: avg: 0.59, 0.33, 0.14 for s,o,p). The reviewer understands the results may not be stable but is still unclear about this case. 3. Why did Figure 5(a) choose 0.4, 0.4, 0.2? The authors should explain why `obviously unreasonable` on lines 188 - 190 is in fact obviously unreasonable. ### Experimental issues 4. 
For Figure 2 and Figure 7 in the supplementary material: a) Where did the cues for the CLS baseline displayed by the sentence come from? Line 214 of the text states to use "relational CLasS-based hints (e.g., ride)", which is somehow inconsistent with the statements. Can the authors explain in more detail how the CLS baseline is tested? Where does the performance difference come from if the two settings use the same prompts? 5. Despite the newly proposed setting, the authors do not discuss other similar works, PEVL [1] and STIP [2]. A discussion of the differences and relations could help. [1] Yao, Y., Chen, Q., Zhang, A., Ji, W., Liu, Z., Chua, T. S., & Sun, M. (2022). PEVL: Position-enhanced pre-training and prompt tuning for vision-language models. _arXiv preprint arXiv:2205.11169_. [2] Zhang, Y., Pan, Y., Yao, T., Huang, R., Mei, T., & Chen, C. W. (2022). Exploring structure-aware transformer over interaction proposals for human-object interaction detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ (pp. 19548-19557). 6. The time efficiency of the different compared methods in Tab. 4 should also be provided. 7. [Experimental Description] For the conjecture in line 247, does the deduction give evidence in the form of a confusion matrix? The reviewer did a cursory check of the dataset: there are a total of 117871 annotations, and these provided actions seem to be rare. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section. Besides the limited technical novelty, this manuscript exhibits many unclear implementation details and insufficient experimental results, especially a lack of discussion of other works using LLMs. I hope the authors can resolve these concerns to make this paper more readable. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors clearly discussed the limitations of the proposed method. These discussions are fair. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments. We are willing to address all your questions. ## Technical Novelty and Main Ideas We appreciate your attention to prompt design for GPT-3.5. While the proposed prompts are tailored to GPT-3.5, the core idea of using compositional visual cues for VRD and utilizing prompts for weight allocation is a promising approach that can be extended to future LLMs with various architectures beyond GPT-3.5. Further elaboration on these contributions is available in the **global response**. ## Presentation and Motivation Issues In our study, we emphasize the varying importance of different components. Our main aim is to have LLMs assess the importance of subject, object, and spatial components. Reviewing examples from the reviewer, it's evident that the subject holds more importance than the object. To further validate this, we gathered statistics about the "look at" relation samples in VG dataset: 28 subject categories and 95 object categories. Such concentration on subjects implies their greater role, making the weights (0.4, 0.4, 0.2) for subject, object, and spatial components (equal weight for subject and object) appear less reasonable. We'll employ milder terms in place of "obviously unreasonable''. Although the results of weight assignment without CoT may in some cases be similar to those with CoT, the motivation for integrating CoT is to enable LLMs to "think before they act". This requires LLMs to ask themselves: "Why am I weighting this way?", which in most cases leads to reasonable weights. As suggested in a concurrent work [1], GPT may be focusing on the last sentence "The sum of weights must be 1.0!", ignoring other important context. We also investigated the effect of introducing CoT, as shown in Table_R 4. The experimental results demonstrated that introducing CoT achieves consistent improvements, which solidly proves its effectiveness. **Table_R 4**: Comparison with or without CoT on the VG dataset. 
|CoT|R@20|R@50|R@100|mR@20|mR@50|mR@100|
|-----|------|------|-------|-------|-------|--------|
|❌|9.5|17.3|24.6|10.2|18.0|25.6|
|✔️|10.6|18.3|25.0|10.7|18.7|27.8|

## Experimental Issues - **Setting.** 1) *Cues for the CLS.* We'd like to clarify that the CLS baseline does not involve the utilization of visual cues (description-based prompts). 2) *CLS baseline.* In a manner similar to various previous object classification approaches, CLS employs relational-class-based prompts as discussed in Line 102. For instance, it adopts prompts like "[REL-CLS]-ing/ed", as illustrated by the example "riding" mentioned in Line 214. CLS obtains the classification score by calculating the similarity between the text feature of the class-based prompt of each relation category and the visual feature. 3) *Difference.* In Figure 2 and Figure 7, the baseline "CLIP" (CLIP) relies solely on class-based prompts for generating predictions, while the method "CLIP with Visual Cues" (RECODE) takes advantage of both class-based prompts and description-based prompts. - **Discuss Similar Works.** Thank you for pointing out the relevance of discussing other similar works and providing references to PEVL, STIP, and other LLM work. - *PEVL.* It focuses on proposing a new pre-training approach for vision-language models, positioning itself as a competitor to CLIP. On the other hand, our work is centered around training-free zero-shot settings, which require no training data and enable direct predictions without any fine-tuning. - *STIP.* It emphasizes traditional fully-supervised methods for human-object interaction (HOI) detection, requiring training and evaluating on the same category set. In contrast, our approach focuses on training-free zero-shot settings, which do not rely on any training data. - *Other LLM work.* The primary application of LLMs has predominantly focused on object classification. 
Directly extending these methods to visual relation detection (VRD) remains limited (cf. Line 301-302). Our work distinguishes itself as the pioneering effort to explore LLMs in the VRD domain, harnessing both LLMs and vision-language models (VLMs) to address VRD tasks efficiently, effectively, and with interpretability. In the revised manuscript, we will include this comprehensive discussion. - **Time Efficiency.** We investigated the time efficiency of each component in RECODE in Table_R 5. Specifically, we calculate the time required to infer each triplet and take the average. Regarding visual cues, their impact on latency remains marginal, amounting to a mere 14.5ms. In terms of spatial features, the computation of similarity demands 13ms. Notably, the spatial similarity in RECODE can be precomputed, given that the spatial images form a finite set. Consequently, the calculation of spatial similarity doesn't lead to an increase in inference time, as it's retrieved from the precomputed list. It is pertinent to highlight that the weighting strategy inherent in RECODE does not encompass feature extraction, thereby resulting in a latency close to 0. **Table_R 5**: Analysis of key components on the VG dataset. Time (ms) represents the computation time of each triplet. The (•) represents the time when the spatial component is retrieved offline.

|Cue|Spatial|Weight|R@20|R@50|R@100|mR@20|mR@50|mR@100|Time(ms)|
|---------------|---------|--------|------|------|-------|-------|-------|--------|-----------|
||||7.2|10.9|13.2|9.4|14.0|17.6|46.7|
|✔️|||7.4|12.3|16.6|9.0|14.0|19.5|61.2|
|✔️|✔️||9.1|13.4|17.4|9.3|15.0|20.3|74.2(61.2)|
|✔️||✔️|7.9|13.4|17.7|9.3|14.7|20.5|61.2|
|✔️|✔️|✔️|**9.7**|**14.9**|**19.3**|**10.2**|**16.4**|**22.7**|74.2(61.2)|

- **Confusion Matrix.** The conjecture in Line 247 is verified in the confusion matrix. 
As shown in Fig\._R4 (cf, **PDF file**), we found various similar appearances which may hinder the detection, e.g., "eat at" vs "sit at", "ride" vs "straddle", etc. [1] Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023. --- Rebuttal 2: Comment: Thank you for your careful and thorough review once again. The deadline for our discussion is approaching. If you have any further concerns or questions, welcome to discuss with us. --- Rebuttal Comment 2.1: Title: Replying to rebuttal of authors Comment: Thanks for the detailed response. The response addressed several of my questions but did not alleviate my concerns about the weight assignment. The authors gave an intuitive explanation and experiments for their CoT, however, there is a huge gap before the experimental evidence can support the explanation. The results can only show that the weights selected by the authors are effective, but cannot prove that in general, the weights assigned by the proposed CoT have a significant advantage compared to baseline and the used weights are not cherry-picked by humans. The same gap also shows in the selections of visual cues, as Reviewer CZ8p mentioned. As there are many unstated details between the GPT's output and the actual cues used, I would like to keep my original score. --- Reply to Comment 2.1.1: Title: Response (Part one) Comment: Thanks again for your careful review, but we still want to clarify the following points: - To reiterate our contribution, we emphasize that **our insight lies in recognizing the distinct significance of subjects, objects, and spatial visual cues**. A simple mean-based approach leads to suboptimal performance. Thus, we adopt GPT to assess the importance of all components with respect to each category. - Besides, if you have used the **web platform of ChatGPT**, you may not get the same results as we do. 
This is because the web platform keeps all the context (**the next output may refer to the previous input and output**), whereas our method clears the context on each query by using the **API**. - Moreover, several papers have shown that CoT can produce more reasonable results. Inspired by concurrent works on LLMs, we introduce in-context learning to standardize the output format (examples are given for "painted on" in the code). In practice, GPT tends to generate the same weights as in the given example (0.4, 0.4, 0.2) if we do NOT introduce CoT. We use a "for" loop to iteratively generate "looking at" weights using the two provided code segments (cf. **code for generating weights in Response Part Two**). The resulting ten consecutive output sets serve as concrete evidence that CoT is more reasonable for our experiment. In order not to violate the review policy, we will provide the results of running the Jupyter notebook to the AC in the form of an anonymous link to ensure the authenticity of the results in the following tables.

**Table: With CoT**

| Run Times | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| sub weight | 0.5 | 0.6 | 0.5 | 0.5 | 0.6 | 0.4 | 0.5 | 0.5 | 0.5 | 0.6 |
| obj weight | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.4 | 0.3 | 0.3 | 0.3 | 0.3 |
| pos weight | 0.2 | 0.1 | 0.2 | 0.2 | 0.1 | 0.2 | 0.2 | 0.2 | 0.2 | 0.1 |

**Table: Without CoT**

| Run Times | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| sub weight | 0.4 | 0.4 | 0.4 | 0.4 | 0.5 | 0.4 | 0.4 | 0.4 | 0.5 | 0.3 |
| obj weight | 0.4 | 0.4 | 0.4 | 0.4 | 0.3 | 0.4 | 0.4 | 0.4 | 0.3 | 0.4 |
| pos weight | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 |

- In addition, we provide the code for generating visual cues at **code for generating visual cues in Response Part Three**. 
- Reviewer CZ8p has affirmed that we have addressed the majority of the concerns. If you are unclear about any of the steps in the implementation details, please don't hesitate to discuss with us. We are more than willing to address any questions you may have.

--- Reply to Comment 2.1.2: Title: Response (Part two) Comment: **Code of Weight Generation with CoT**

```python
import openai
import json
import time
import random

openai.api_key = "YOUR KEY"

sub_feats = ["with eyes directed towards the object, with head upright"]
obj_feats = ["with visible features such as front, display, or screen"]
pos_feats = ["subject positioned either above, below, left or right of the object at a mid distance"]

prompts = '''
Suppose you are a visual relation(predicate) classification model.
Given: subject belongs to [product] and object belongs to [product].
The visual features of subject: ['with a flat surface', 'with colors or designs'].
The visual features of object: ['the painted design or image may cover all or part of the its body'].
The visual features of position: ['subject is placed on the surface of the object'].
Q: How do you weight these visual features(subject, object, position) to determine the predicate is "painted on"? The sum of weights must be 1.0!
A: Let's think step by step! First, we need to determine which visual feature is the most important for identifying "painted on" as the predicate. From the given visual features, it seems like the presence of the painted design or image on the object may be the most significant indicator of "painted on". However, the fact that the subject is placed on the surface of the object is also important. Based on this assessment, we can assign weights to each visual feature as follows: Weight("painted on") = 0.4 * Weight(visual features of subject) + 0.4 * Weight(visual features of object) + 0.2 * Weight(visual features of position).
Given: subject belongs to [{}] and object belongs to [{}].
The visual features of subject: {}.
The visual features of object: {}.
The visual features of position: {}.
Q: How do you weight these visual features(subject, object, position) to determine the predicate is "{}"? The sum of weights must be 1.0!
A: Let's think step by step!
'''.format('animal', 'product', sub_feats, obj_feats, pos_feats, 'looking at')

messages = [
    {"role": "user", "content": prompts}
]

try:
    rsp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        timeout=10,
        request_timeout=30,
    )
    rsp = json.loads(json.dumps(rsp))
    content = rsp['choices'][0]['message']['content']
    rel_text = content
    print(rel_text)
except Exception as e:
    print(e.args)
    time.sleep(60)
```

**Code of Weight Generation without CoT**

```python
# (imports and API key repeated so this cell runs standalone)
import openai
import json
import time

openai.api_key = "YOUR KEY"

sub_feats = ["with eyes directed towards the object, with head upright"]
obj_feats = ["with visible features such as front, display, or screen"]
pos_feats = ["subject positioned either above, below, left or right of the object at a mid distance"]

prompts = '''
Suppose you are a visual relation(predicate) classification model.
Given: subject belongs to [product] and object belongs to [product].
The visual features of subject: ['with a flat surface', 'with colors or designs'].
The visual features of object: ['the painted design or image may cover all or part of the its body'].
The visual features of position: ['subject is placed on the surface of the object'].
Q: How do you weight these visual features(subject, object, position) to determine the predicate is "painted on"? The sum of weights must be 1.0!
A: Weight("painted on") = 0.4 * Weight(visual features of subject) + 0.4 * Weight(visual features of object) + 0.2 * Weight(visual features of position).
Given: subject belongs to [{}] and object belongs to [{}].
The visual features of subject: {}.
The visual features of object: {}.
The visual features of position: {}.
Q: How do you weight these visual features(subject, object, position) to determine the predicate is "{}"? The sum of weights must be 1.0!
'''.format('animal', 'product', sub_feats, obj_feats, pos_feats, 'looking at')

messages = [
    {"role": "user", "content": prompts}
]

try:
    rsp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        timeout=10,
        request_timeout=30,
    )
    rsp = json.loads(json.dumps(rsp))
    content = rsp['choices'][0]['message']['content']
    rel_text = content
    print(rel_text)
except Exception as e:
    print(e.args)
    time.sleep(60)
```

--- Reply to Comment 2.1.3: Title: Response (Part three) Comment: **Code of Visual Cues Generation**

```python
# (imports and API key repeated so this cell runs standalone)
import openai
import json
import time

openai.api_key = "YOUR KEY"

example_triplets = ['carrying_product_human', 'carrying_product_animal', 'carrying_animal_animal']
raw_rel_prompts_dict = {}
for rel_sub_obj_key in example_triplets:
    rel, sub, obj = rel_sub_obj_key.split('_')
    prompts = '''
Known: a visual triplet is formulated as [subject, predicate, object].
Note that: [position] must not include nouns other than subject and object! [position] must contain [orientation: ("above", "below", "left", "right", "inside"), shape: ("horizontal", "vertical", "square"), distance: ("small distance", "mid distance", "large distance")]!
Describe the visual features of the predicate "sitting on" in a photo, when subject belongs to [human], object belongs to [product]:
[subject]:
- with legs.
- with hip.
[object]:
- with flat surface.
[position]:
- square subject above horizontal object with a small distance.
Describe the visual features of the predicate "{}" in a photo, when subject belongs to [{}], object belongs to [{}]:
-'''.format(rel, sub, obj)
    messages = [
        {"role": "user", "content": prompts}
    ]
    try:
        rsp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            timeout=10,
            request_timeout=30,
        )
        rsp = json.loads(json.dumps(rsp))
        content = rsp['choices'][0]['message']['content']
        rel_text = content
        print(rel_sub_obj_key, rel_text)
        raw_rel_prompts_dict[rel_sub_obj_key] = rel_text
    except Exception as e:
        print(e.args)
        time.sleep(60)
```
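To make explicit how the generated weights enter the final prediction, here is a hypothetical sketch of the weighted fusion of per-component similarities described in this rebuttal. The function name and the averaging of per-cue similarities are our assumptions for illustration, not the authors' released implementation.

```python
def relation_score(sim_sub, sim_obj, sim_pos, weights):
    """Weighted fusion of per-component cue similarities.

    sim_*: similarities between the image feature and the text features of
    each component's visual cues (one value per cue).
    weights: (w_sub, w_obj, w_pos) produced by GPT, summing to 1.0.
    """
    mean = lambda xs: sum(xs) / len(xs)
    w_sub, w_obj, w_pos = weights
    # Average the cues of each component, then take the weighted sum.
    return w_sub * mean(sim_sub) + w_obj * mean(sim_obj) + w_pos * mean(sim_pos)

# Example: fusing illustrative similarities with the CoT weights (0.5, 0.3, 0.2)
# generated for "looking at" in one of the runs reported above.
score = relation_score([0.31, 0.27], [0.22], [0.18], (0.5, 0.3, 0.2))
```

In this sketch the predicted relation would be the category with the highest fused score; the spatial term is the one that can be served from the precomputed lookup discussed under time efficiency.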
Summary: This paper aims to address the VRD problem using LLMs. The paper decomposes the visual features into human, object, and spatial features, and designs prompts to generate visual cues that describe each of these types of visual features. The relation classification is established by calculating the distance between visual and semantic features, and dynamic weights generated by LLMs are also integrated to enhance the training process.

Strengths:
1. This paper focuses on a critical issue in visual relationship detection tasks, exploring the potential of leveraging LLMs to enhance visual relationship understanding.
2. The paper introduces a novel and reasonable approach by decomposing each predicate category into human, object, and spatial descriptions.
3. The authors thoroughly investigate various approaches to enhance the quality of prompts, encompassing both the generation of visual cues and the improvement of weights.

Weaknesses:
1. It appears that the visual cues employed in the main paper are presented as mere examples, leaving uncertainty regarding the specific visual cues utilized in the experiments. Furthermore, the visual cues depicted in Figures 3 and 4 exhibit notable differences, with Figure 4 generating more complex sentences. As a result, evaluating the quality of the visual cues based on the current evidence becomes challenging.
2. RECODE's performance gain on the HICO-DET and V-COCO datasets is marginal, and the authors did not provide error bars in their reports. Additionally, the ablation studies were solely conducted on the VG dataset, which could have substantial differences compared to the HICO-DET dataset. Consequently, it is difficult to be convinced that RECODE is as effective as claimed by the authors.
3. The main technical contribution of this paper lies in the development of specifically designed prompts. However, the improvements made to the prompts are relatively straightforward, and the utilization of CoT is a standard practice.
4. There are many incurious statements/claims. For example, in lines 47-55, it is unclear why a person has to stand while holding an object; in lines 67-69, it is not clarified why the act of holding depends on spatial factors.

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. Could the authors provide additional examples of prompts and the corresponding visual cues generated by GPT that were utilized in the real experiments?
2. Could the authors present more empirical evidence to further support the benefits of RECODE?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for the detailed comments. We are willing to address all your questions.

## Clarification of Visual Cues

- **Examples in Figure 3 and Figure 4.** The short prompts showcased in Figure 4 were primarily intended to demonstrate the enhanced accuracy achieved by the Guided Relation Component Description. In the real experiments, to standardize the output format, we used a **normalized visual cue description prompt** (cf. **Section B** in the supplementary materials), as mentioned in footnote 2 in Line 174. All of our experiments were conducted using these normalized prompts to maintain uniformity in our approach. We will revise this part to reduce misunderstanding.
- **Extra visual cue examples.** We would like to draw your attention to the fact that Figure 2, Figure 6, and Figure 7 already showcase a selection of generated visual cues derived from the normalized visual cue description prompt. Besides, Fig\._R 2 (cf. **PDF file**) also shows extra visual cues generated by the normalized prompt.

## Empirical Evidence

- **Error Bars.** As shown in Fig\._R 3 (cf. **PDF file**), we plotted the error bars for three splits on the HICO-DET dataset. Across the three distinct splits (Full, Rare, and Non-Rare), our model's performance showcases impressive stability, i.e., 32.5% to 32.8%, 33.18% to 33.33%, and 32.2% to 32.55% for the Full, Rare, and Non-Rare splits, respectively. These minuscule error bars reflect the high degree of consistency in our results, indicating that the measured values are reliable and repeatable.
- **Ablation studies on the HICO-DET and V-COCO datasets.** We have extended our ablation studies to encompass both the HICO-DET and V-COCO datasets. The results of these ablation studies are presented in Table_R 3, providing a more comprehensive evaluation of the effectiveness and generalizability of our proposed RECODE method.

**Table_R 3**: Ablation studies on the HICO-DET and V-COCO datasets.
| Cue | Spatial | Weight | HICO-DET (Full) | HICO-DET (Rare) | HICO-DET (Non-Rare) | V-COCO (Scenario 1) | V-COCO (Scenario 2) |
|---|---|---|---|---|---|---|---|
| | | | 30.9 | 30.7 | 31.0 | 25.5 | 28.6 |
| ✔️ | | | 32.5 | 33.0 | 32.2 | 25.8 | 28.9 |
| ✔️ | ✔️ | | 32.6 | 33.0 | 32.4 | 25.7 | 28.8 |
| ✔️ | | ✔️ | 32.7 | 33.1 | 32.5 | 25.9 | 29.0 |
| ✔️ | ✔️ | ✔️ | **32.7** | **33.2** | **32.5** | **26.0** | **29.0** |

- **More comparisons with SOTA methods.** In Table_R 1 and Table_R 2, we also reported the results of combining different SOTA visual-language models and comparing with other SOTA SGG methods, which proves the effectiveness and universality of our method.

## Contributions

Thank you for your thoughtful feedback on the main technical contribution of our paper. We appreciate the opportunity to clarify our focus and emphasize the key aspects of our work (cf. the **global response** for the main contributions).

- We point out the challenges of relation sensitivity, spatial discriminability, and computational efficiency in zero-shot VRD. We address them by decomposing the task into three distinct components: subject, object, and spatial descriptions. Designing specific prompts is a feasible way to achieve this goal, but it is not the central focus of our contribution.
- We observe that the importance of each component description is different. Notably, we are the first work to utilize LLMs to effectively assign reasonable weights to these components. The CoT is employed solely to enhance the rationality of the weight generation process.

## "Incurious" Statements

Regarding the statement in Lines 47-55, we sincerely apologize for any confusion caused. Our intention was not to assert an absolute requirement for a person to stand while holding an object. Rather, we aimed to convey that in many observed scenarios, individuals are more commonly seen standing while holding objects, and we used the phrase "**might be**" in Lines 47-55 to indicate this likelihood.
Similarly, we appreciate the need for clarity in the statement made in Lines 67-69. We would like to emphasize that spatial cues can play a crucial role in distinguishing certain relations, such as "laying on" and "holding", where subject and object cues alone may not be sufficient for discrimination. We intend to revise this statement to better reflect our intent, underscoring the importance of spatial cues in the identification of specific relations.

---

Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. It has addressed most of my questions. However, the contribution on the technical side is still believed to be limited, as stated in my previous review, while I acknowledge the introduction of LLMs is interesting and effective. Based on such considerations, I'd like to recommend a borderline accept and suggest the final version incorporate those clarifications and results stated in the rebuttal.

---

Reply to Comment 1.1.1:
Comment: Thank you for raising the score! We will incorporate the clarifications and results in the revised version. If you have any further questions or concerns, you are welcome to discuss them with us. Your feedback is greatly appreciated!
Summary: This paper proposes a novel method for zero-shot visual relation detection by leveraging an LLM (e.g., GPT) and a VLM (e.g., CLIP). Specifically, the proposed approach decomposes each predicate category into subject, object, and spatial components and enriches each with the help of LLMs, which can generate description-based visual cues to help distinguish semantically similar concepts. Different visual cues are used to enhance discriminability from different perspectives, and the authors again use an LLM to assign weights to different components for effective fusion. Extensive experiments on four different datasets are provided to demonstrate the effectiveness and interpretability.

Strengths:
1. The proposed approach is theoretically sound and intuitive. Enriching the prompt from class-based to description-based can provide more information to enhance relation sensitivity, and it also improves explainability, as the relation classification score can reveal the most important factors for the prediction.
2. The decomposition of the subject-object pair makes it much more efficient to process visual signals, as the previous O(N^2) patches now reduce to O(N). The spatial relationship also makes sense as an abstraction from real objects to just the relations.
3. The paper is well written and easy to follow. The extensive experiments and ablation studies/visualizations help a lot in understanding the model.

Weaknesses:
1. The most important issue with this paper is that the evaluation section does not have important baselines. Specifically, in Table 1 and Table 2 the authors only show the performance of the proposed model and simplified versions (CLS and CLSDE), which reads more like an ablation study. Much previous work has attempted similar tasks and experimented on the same datasets, e.g., [1][2][3].
2. In Table 2, I guess the bolded numbers should be the highest (best)? For HICO-DET, CLS has the same performance on the "Rare" category as RECODE and thus should be highlighted as well, I think.
3. A very, very minor issue: the zero-shot chain-of-thought prompt used in most of the literature is "let's think step by step", not "let's think it step by step". Formal usage should be "think", "think about it", or "think through it", rather than "think it".

[1] https://arxiv.org/abs/1804.10660
[2] https://arxiv.org/abs/1707.09423v2
[3] https://arxiv.org/pdf/2004.00436.pdf

Technical Quality: 2 fair
Clarity: 3 good

Questions for Authors: Overall, I think the paper is solid; it is just that the evaluation is insufficient, as it doesn't have any comparison with previous methods. I will consider raising my scores if the comparison is provided in a revised version (I only listed a few papers in the weaknesses section, and I'm pretty sure they are not the latest ones; please compare against the most recent/SOTA methods, as it is more meaningful).

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: This paper doesn't discuss the limitations of the proposed method; for example, one important underlying assumption is the reliability of the LLM (responsible for decomposition and weight estimation) and the quality of the VLM model. I don't see obvious potential negative societal impact with this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for the detailed comments. We are willing to address all your questions.

## Comparison with More Baselines

**Table_R 2**: Comparison with SOTA VRD methods on the VG dataset. Note that none of these methods can be applied in the **training-free** zero-shot setting.

| Model | No Training | Unseen Relation | Training Data Source | zR@20 | zR@50 | zR@100 |
|---|---|---|---|---|---|---|
| Motifs [1] | ❌ | ❌ | VG | 8.9 | 15.2 | 18.5 |
| COACHER [2] | ❌ | ❌ | VG & ConceptNet | 28.2 | 34.1 | 37.2 |
| DPL [3] | ❌ | ❌ | VG | 6.0 | 7.7 | 9.3 |
| CaCao [4] | ❌ | ✔️ | VG & CC3M & COCO | 17.2 | 21.3 | 23.1 |
| RECODE (ours) | ✔️ | ✔️ | -- | 8.2 | 16.1 | 23.2 |

- **Comparison with Training-based Methods.** Here we compare the proposed **training-free** RECODE framework with well-designed training-based ones. Note that such comparisons are **unfair**, as training-based frameworks can learn the underlying patterns and data distribution from the training set. For completeness, we still report the results and investigate the performance gap between training-based frameworks and RECODE. Specifically, we compare RECODE with several relevant baselines, including triplet-level zero-shot VRD [1, 2], few-shot VRD [3], and category-level zero-shot VRD [4]. Since none of them can detect relations without training, we report Zero-shot Recall@K (**zR@K**), which only calculates Recall@K for unseen **triplet** categories.
  - Triplet-level zero-shot VRD methods:
    - Motifs [1] is a traditional strong baseline without explicitly modeling the nature of zero-shot.
    - COACHER [2] explicitly models the nature of zero-shot and takes the power of common sense from ConceptNet, resulting in better performance.
  - Few-shot VRD methods:
    - DPL [3] is a few-shot baseline, which mainly investigates making predictions with a few examples (here we evaluate 1-shot).
  - Category-level zero-shot VRD methods:
    - CaCao [4] also explicitly models the nature of zero-shot, and leverages language information from captions of CC3M and COCO for enhanced performance.

Surprisingly, even without training, RECODE still achieves competitive results, with zR@20, zR@50, and zR@100 of 8.2%, 16.1%, and 23.2%, respectively. This signifies its potential in handling unseen categories, due to the effective visual cues and inference mechanisms.

- **Generalization on More Training-free Baselines.** Furthermore, we report the results of our method applied to different SOTA visual-language models in Table_R 1, which also proves the effectiveness of RECODE.

## Minor Error and Limitations

We appreciate your suggestions and will address the issues in the revised version. Besides, we have discussed the limitations in **Section F** of the Supplementary Material.

[1] Neural motifs: Scene graph parsing with global context. In CVPR, 2018. \
[2] Zero-shot scene graph relation prediction through commonsense knowledge integration. In ECML PKDD, 2021. \
[3] Decomposed prototype learning for few-shot scene graph generation. arXiv preprint arXiv:2303.10863, 2023. \
[4] Visually-prompted language model for fine-grained scene graph generation in an open world. In ICCV, 2023.

---

Rebuttal Comment 1.1:
Title: Thanks for the additional experiments!
Comment: I've read the rebuttal from the authors and am satisfied with the response. I don't have additional questions and will raise my rating.

---

Reply to Comment 1.1.1:
Comment: We sincerely appreciate your decision to raise the rating after reviewing our rebuttal. We are delighted that you found our response satisfactory. Thank you for your positive assessment of our work.

---

Rebuttal 2:
Comment: Thank you for your careful and thorough review once again. The deadline for our discussion is approaching. If you have any further concerns or questions, you are welcome to discuss them with us.
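For readers unfamiliar with the zR@K metric reported in this thread: it is Recall@K restricted to ground-truth triplets whose category combination never occurs in the training set. A minimal sketch under an assumed data layout (per-image ranked triplet predictions; this is our illustration, not the authors' evaluation code):

```python
# Sketch of Zero-shot Recall@K (zR@K): Recall@K computed only over ground-truth
# triplets whose (subject, predicate, object) category combination is unseen
# during training. The data layout here is our assumption.
def zero_shot_recall_at_k(predictions, ground_truth, unseen_triplets, k):
    """predictions: per-image ranked lists of predicted triplet categories;
    ground_truth: per-image lists of ground-truth triplet categories;
    unseen_triplets: set of triplet categories absent from the training set."""
    hits, total = 0, 0
    for preds, gts in zip(predictions, ground_truth):
        top_k = set(preds[:k])
        for gt in gts:
            if gt in unseen_triplets:
                total += 1
                hits += gt in top_k
    return hits / total if total else 0.0
```

Ordinary R@K is the same computation without the `unseen_triplets` filter, which is why a training-free method can be scored on either.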
Summary: This paper presents RECODE, a novel method for zero-shot visual relation detection (VRD), designed to address the shortcomings of models like CLIP in distinguishing subtle relation categories and in spatial discriminability. RECODE leverages large language models (LLMs) to generate detailed description-based prompts for each relation class component, thereby enhancing VRD performance. The authors also introduce a chain-of-thought method that breaks down the problem into smaller parts for LLMs, thereby assigning reasonable weights to each component. The effectiveness and interpretability of the method are demonstrated through experiments on four benchmark datasets.

Strengths:
1. The approach introduces a novel framework, called RECODE, for zero-shot VRD that addresses the limitations of traditional class-based prompts. It decomposes the visual features of a triplet into subject, object, and spatial features and generates detailed descriptions of visual cues for each relation category. The use of chain-of-thought prompting for generating reasonable weights is a unique and creative approach.
2. The approach leverages large language models (LLMs), specifically GPT-3.5-turbo and CLIP, for the generation of descriptions and similarity calculations. The use of LLMs provides a strong foundation for generating informative and accurate descriptions of visual cues. The evaluation is conducted on four benchmark datasets, and the results demonstrate significant improvements over baseline methods.
3. The paper provides clear descriptions and explanations of the proposed framework, including the visual feature decomposing, semantic feature decomposing, and relation classification steps. The process of generating descriptions of visual cues and weights using LLMs is well described, and the chain-of-thought method is illustrated with examples. The evaluation metrics and experimental setup are clearly presented.
4. The proposed approach addresses the challenge of zero-shot VRD by improving the discriminability of similar relation categories. By incorporating specific visual cues and generating descriptions, the approach enhances the performance of relation classification. The experimental results show significant improvements over baseline methods, demonstrating the effectiveness and interpretability of the proposed approach. The approach has the potential to advance the field of VRD and contribute to applications such as image understanding, scene understanding, and human-computer interaction.

Weaknesses: While the experimental results show improvements over baseline methods, the paper lacks a thorough analysis of failure cases. Understanding when and why the proposed approach fails to accurately predict relations is crucial for identifying its limitations and potential areas of improvement. Analyzing failure cases and providing insights into the challenges faced by the model would strengthen the evaluation and guide future research directions.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
1. The paper mentions the incorporation of specific visual cues to improve the discriminability of relation categories. Could you provide more details on how these cues were selected? What criteria were used to determine their relevance and effectiveness? Additionally, did you consider any alternative visual cues during the experimentation process? Exploring different visual cues and discussing their impact could provide further insights into the effectiveness of the proposed approach.
2. The paper demonstrates improvements in relation classification for zero-shot VRD, but it would be valuable to discuss the scalability of the proposed approach. How does the performance scale with an increasing number of relation categories and visual concepts? Are there any computational or efficiency limitations that arise when dealing with larger datasets or more complex scenes?
3. The paper should provide a clear justification for the choice of evaluation metrics used to assess the performance of the proposed approach. Are there any limitations or biases associated with the selected metrics? Additionally, it would be helpful to include a discussion of the limitations of these metrics in capturing the true performance of zero-shot VRD models.
4. While the paper discusses the experimental results and improvements over baseline methods, it would be valuable to have a section dedicated to the limitations of the proposed approach. Identifying and addressing these limitations can help guide future research directions. Additionally, it would be beneficial to have a discussion of potential extensions or improvements to the proposed approach that could further enhance its performance or broaden its applicability.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. The authors have not included a section or discussion on limitations and, if applicable, the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for the detailed comments. We are willing to address all your questions.

## Analysis of Failure Cases

We sincerely appreciate the reviewer's valuable feedback and the suggestion to include a thorough analysis of failure cases in our paper. We have conducted an in-depth examination of failure cases of our proposed method. As shown in Fig\._R 2 (cf. **PDF file**), the descriptions of mismatched relations are consistent with the content of the images (e.g., the armrests of the toilet), leading to mismatched yet *reasonable* predictions. We observe certain scenarios where the relation between the subject and object could be interpreted in multiple ways, and both interpretations could be considered reasonable. For instance, in the case of "girl-toilet", both "using" and "sitting on" are plausible given the context. However, the ground truth only contains one correct label. On the other hand, we also found instances where our method outperformed the ground-truth annotations. For instance, in the case of "man-snow", our method accurately predicted the relation as "lying on" in Fig\._R 2. This observation highlights the robustness of our approach.

## Visual Cues Selection and Alternatives

In our work, we focus on a **training-free** setting (cf. the **global response** for clarification of experimental settings), which directly involves inference for zero-shot VRD. This difficult setting poses two challenges: 1) we are unable to learn visual cues by designing training objectives; 2) we are unable to evaluate the quality of visual cues rigorously.

- **Cues Selection.** To tackle these issues, we propose a specific approach for selecting visual cues, as outlined in Eq.(1). Specifically, for the subject, object, and spatial descriptions, we leverage the power of the GPT model to generate a variable number of descriptions.
Subsequently, we compute the similarity between these descriptions and their corresponding visual features, ultimately taking the mean of these similarity scores. Additionally, the weights assigned to the subject, object, and spatial location components are determined by the GPT model, as detailed in Section 2.2.2.

- **Alternative Visual Cues.** In our experiments, we have thoroughly considered the use of different prompts for generating visual cues, as explained in Section 2.2.1: (1) **Relation class description** (cf. Figure 4(a)) generates descriptions for each relation class directly. (2) **Relation component description** (cf. Figure 4(b)) generates descriptions for each component of the relation separately. (3) **Guided relation component description** (cf. Figure 4(c)) incorporates the high-level object category to guide the generation process. As shown in Table 1, our results demonstrate that the guided relation component description yields superior performance.

## Performance Scale and Computational Limitations

In our proposed approach, the absence of a training phase means that the overall time consumption is mainly driven by the inference process. As the number of relation categories increases, the total number of visual descriptions over all relations also increases, and the similarity computation between visual features and the text features of these descriptions contributes to the increase in time consumption (cf. Table\_R 5). However, given all categories, the time consumption of similarity calculation is also unavoidable for other zero-shot work.

## Evaluation Metrics

The evaluation metrics used in our work, such as Recall@K (R@K) and Mean Recall@K (mR@K) for SGG datasets (VG and GQA) [1] and mean Average Precision (mAP) for HOI datasets (HICO-DET and V-COCO) [2], are **widely adopted in the field of (zero-shot) visual relation detection**. However, we acknowledge that there may be certain limitations or biases associated with these metrics.
Specifically, due to the influence of the long-tail distribution in the datasets, R@K may exhibit a bias towards the head predicate categories (categories with many samples) [1]. This bias could potentially affect the overall evaluation results and may not fully capture the model's performance on rare or less frequent relation categories. Thus, we also report mR@K as a reference. Although these metrics are already widely used, designing good evaluation metrics for VRD itself is still an open problem.

## Limitations

We have discussed the limitations in **Section F** of the Supplementary Material.

[1] Unbiased scene graph generation from biased training. In CVPR, 2020. \
[2] Exploring structure-aware transformer over interaction proposals for human-object interaction detection. In CVPR, 2022.

---

Rebuttal Comment 1.1:
Comment: The authors have addressed the majority of my concerns. After evaluating their responses and taking into account feedback from other reviewers, I have chosen to uphold my original score.

---

Reply to Comment 1.1.1:
Comment: Thank you for the positive rating and for your thorough consideration of our responses. If you have any further questions or concerns, please don't hesitate to discuss them with us.
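The head-category bias discussed in this thread is what mean Recall@K counters: per-predicate recalls are averaged uniformly, so rare predicates count as much as frequent ones. A minimal sketch under an assumed data layout (our illustration, not the evaluation code used in the paper):

```python
from collections import defaultdict

# Sketch of mean Recall@K (mR@K): Recall@K is computed per predicate class and
# then averaged uniformly over classes, so rare predicates weigh as much as
# frequent ones. Triplets are assumed to be (subject, predicate, object) tuples.
def mean_recall_at_k(predictions, ground_truth, k):
    hits = defaultdict(int)
    totals = defaultdict(int)
    for preds, gts in zip(predictions, ground_truth):
        top_k = set(preds[:k])
        for gt in gts:
            predicate = gt[1]
            totals[predicate] += 1
            hits[predicate] += gt in top_k
    recalls = [hits[p] / totals[p] for p in totals]
    return sum(recalls) / len(recalls) if recalls else 0.0
```

Under a long-tail distribution, plain R@K can stay high while mR@K collapses, which is the bias the rebuttal describes.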
Rebuttal 1:
Rebuttal: We appreciate the feedback from all reviewers. First of all, we would like to clarify and highlight our **different experimental settings** and **main contributions** relative to existing work. Then, we will address all mentioned misunderstandings or questions from each reviewer individually.

## Different Experimental Settings

- **Existing Zero-Shot Settings.** For the Visual Relation Detection (VRD) task (or Scene Graph Generation, SGG), there are several different "zero-shot" evaluation settings. More specifically, let $\mathcal{O}^{train}$ ($\mathcal{O}^{test}$) and $\mathcal{R}^{train}$ ($\mathcal{R}^{test}$) be the sets of object categories and relation categories during the training (test) stage, respectively. Meanwhile, we use $\mathcal{T}^{train}$ and $\mathcal{T}^{test}$ to denote relation triplet categories, which are the combinations of object and relation categories (e.g., "man-riding-bike") in the training and test set, respectively. Currently, all existing "zero-shot" VRD/SGG work can be further categorized into two types:
  - **Triplet-level Zero-shot VRD (with training)** [1]**.** In this setting, object and relation categories remain consistent across training and inference, i.e., $\mathcal{O}^{train} = \mathcal{O}^{test}$ and $\mathcal{R}^{train} = \mathcal{R}^{test}$, while certain triplet categories remain unseen by the model in the test set, i.e., $\mathcal{T}^{train} \ne \mathcal{T}^{test}$. Since the model has acquired knowledge about all objects and relations, the emphasis lies in evaluating its ability to generalize to these novel triplets within $\mathcal{T}^{test}$.
  - **Category-level Zero-shot VRD (with training)** [2, 3]**.** Here, both object and relation categories differ between the training and inference phases, i.e., $\mathcal{O}^{train} \neq \mathcal{O}^{test}$ and $\mathcal{R}^{train} \neq \mathcal{R}^{test}$. In addition, the triplet categories are also different, i.e., $\mathcal{T}^{train} \neq \mathcal{T}^{test}$.
    To detect objects and relations from a much larger and potentially unlimited set of possible categories, the model should learn distinguishable and generalizable knowledge from the limited training set.

- **Our Training-free Zero-Shot Setting.** As mentioned above, all existing zero-shot VRD/SGG work still *needs a training set for parameter learning*. In this work, we focus on a more challenging setting: **training-free zero-shot VRD, i.e., solving VRD without any training stage**. Recently, the training-free paradigm has ushered in a new era in our community [4, 5, 6]. As for VRD, this new and challenging setting can notably reduce labor costs, particularly considering the complexities of manual labeling for relations [7]. On the other hand, it poses considerable challenges to perform such hard tasks without fine-tuning and labeled data. The differences between our new "zero-shot" setting and existing work are also illustrated in Fig\._R 1 (cf. **PDF file**).

## Main Contributions

We understand that the significance of our contributions might not have been sufficiently highlighted in the paper. We apologize for any confusion and would like to reiterate the main focus of our work:

- **Compositional Visual Cues for VRD**: Our primary contribution lies in the utilization of compositional visual cues to facilitate the challenging task of training-free zero-shot VRD. Instead of solely focusing on designing prompts to generate better descriptions or visual cues, we proposed the RECODE method, which leverages large language models (LLMs) to generate detailed and informative descriptions for different components of relation categories, such as subject, object, and spatial cues. These descriptions serve as description-based prompts that assist vision-language pretrained models (e.g., CLIP) in distinguishing between similar relation categories and improving VRD performance.
- **Weight Assignment with LLMs and CoT**: In our work, we explored the importance of different components in relation categories and recognized that their contributions are not equal. To address this, we introduced a novel approach using LLMs to assign reasonable weights to each component. The chain-of-thought (CoT) method was introduced as guidance to generate rationales and weights, making the weight assignment more interpretable and reasonable. Note that CoT is not the main focus of our work; instead, it was employed as a tool to improve the quality and robustness of weight assignment.

We hope that these clarifications will better communicate the significance and novelty of our work. We thank the reviewers once again for their valuable feedback, which has helped us improve the quality and clarity of our paper.

[1] Zero-shot scene graph relation prediction through commonsense knowledge integration. In ECML PKDD, 2021. \
[2] Compositional prompt tuning with motion cues for open-vocabulary video relation detection. In ICLR, 2023. \
[3] Visually-prompted language model for fine-grained scene graph generation in an open world. In ICCV, 2023. \
[4] Visual programming: Compositional visual reasoning without training. In CVPR, 2023. \
[5] NavGPT: Explicit reasoning in vision-and-language navigation with large language models. arXiv preprint arXiv:2305.16986, 2023. \
[6] Segment anything meets point tracking. arXiv preprint arXiv:2307.01197, 2023. \
[7] The devil is in the labels: Noisy label correction for robust scene graph generation. In CVPR, 2022.

Pdf: /pdf/53b13eb94b5c1bca04c7d766b27c34928a1cdfc9.pdf
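Read together, the two contributions in the global rebuttal amount to a weighted fusion of per-component similarities at inference time. A minimal sketch of our own (not the authors' code; the inputs are placeholders for CLIP-style similarity scores and LLM-generated weights assumed to sum to 1.0 per relation):

```python
# Sketch: composite relation scoring from decomposed cues.
# sims[component] holds similarities between the image features and each
# generated description for that component; weights are LLM-assigned.
def relation_score(sims, weights):
    score = 0.0
    for component in ("subject", "object", "spatial"):
        s = sims[component]
        score += weights[component] * (sum(s) / len(s))  # mean over descriptions
    return score

def classify(candidate_sims, weights_per_relation):
    # Pick the relation category with the highest composite score.
    return max(candidate_sims,
               key=lambda r: relation_score(candidate_sims[r], weights_per_relation[r]))
```

With spatial-heavy weights for positional predicates and subject/object-heavy weights for appearance-driven ones, the same similarity scores can yield different winners, which is the motivation for per-relation weight assignment.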
NeurIPS_2023_submissions_huggingface
2023
Summary: Naively utilizing CLIP with prevalent class-based prompts for zero-shot VRD has several weaknesses, e.g., it struggles to distinguish between fine-grained relation types and neglects essential spatial information of two objects. To this end, the authors propose a novel method for zero-shot VRD: RECODE, which solves RElation detection via COmposite DEscription prompts. Specifically, RECODE first decomposes each predicate category into subject, object, and spatial components. Then, it leverages large language models (LLMs) to generate description-based prompts (or visual cues) for each component. Different visual cues enhance the discriminability of similar relation categories from different perspectives, boosting performance in VRD. To dynamically fuse different cues, they introduce a chain-of-thought method that prompts LLMs to generate reasonable weights for different visual cues.

Strengths:
- The framework for decomposing visual cues and using an LLM to separately generate prompts for subject, object, and spatial features seems novel.
- The proposed method shows noticeable performance improvements, and the authors provided an ablation study to solidly analyze the design choices of the proposed method.

Weaknesses:
- The baselines in the experiments seem weak. Are the baseline methods recent enough models? To verify the effectiveness of the proposed method, RECODE should be attached to a recent state-of-the-art model and show consistent performance improvement.

Technical Quality: 4 excellent
Clarity: 4 excellent

Questions for Authors: Please refer to the questions in the weaknesses.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I cannot find a potential negative societal impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments. We are willing to address all your questions. ## Incorporated to More Recent SOTA Models **Table_R 1**: Performance of combining with different SOTA pre-trained visual-language models on VG dataset. CLS$^\star$ denotes the model uses class-based prompts to compute the training-free zero-shot similarity between the image and text. | Backbone | Method | R@20 | R@50 | R@100 | mR@20 | mR@50 | mR@100 | |-------------------------- |------------------ |------ |------ |-------|-------|-------|--------| | MS-CLIP [1] | Baseline (CLS$^\star$) | 8.2 | 15.1 | 21.5 | 7.9 | 16.4 | 22.4 | | | RECODE$^\star$ | **9.2** | **17.3** | **24.7** | **8.3** | 15.4 | **22.6** | | DECLIP [2] | Baseline (CLS$^\star$) | 11.0 | 18.3 | 24.4 | 11.0 | 19.0 | 27.1 | | | RECODE$^\star$ | **11.4** | **19.3** | **25.9** | 10.5 | **19.5** | **27.8** | We acknowledge the importance of comparing our proposed approach with the recent state-of-the-art (SOTA) models in the field. Since our methods are training-free (cf, the **global response** for clarifying experimental settings), we combined our methods with other SOTA pre-trained visual-language (VL) models, e.g., MS-CLIP [1] and DECLIP [2], which can achieve training-free zero-shot VRD. The results are reported in Table\_R 1. Notably, when combining RECODE$^\star$ with these different SOTA VL models, we also observed considerable performance gains compared to the baseline CLS$^\star$. These consistent improvements underline the effectiveness and generalizability of our RECODE. [1] Learning visual representation from modality-shared contrastive language-image pre- training. In ECCV, 2022. \ [2] Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In ICLR, 2022. --- Rebuttal 2: Comment: The authors' response answered my question. Therefore, I will keep my score for this paper. 
--- Rebuttal Comment 2.1: Comment: Thank you for reviewing our paper and indicating that our response addressed your questions. We are grateful for your decision to maintain your score for the paper. If you have any further questions or feedback, please do not hesitate to get in touch.
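Illustrative sketch (not part of the original review or rebuttal): the cue-fusion step described in this exchange — description-based similarity scores for the subject, object, and spatial components, combined with LLM-assigned weights — can be sketched numerically as a simple weighted sum. All numbers, relation names, and weights below are hypothetical placeholders, not from the paper:

```python
import numpy as np

# Hypothetical per-component similarity scores (e.g., cosine similarities
# between image features and LLM-generated description prompts) for two
# candidate relations; the components are (subject, object, spatial),
# following the decomposition described above.
sims = {
    "riding":  np.array([0.31, 0.28, 0.22]),
    "walking": np.array([0.30, 0.27, 0.12]),
}

# Hypothetical LLM/CoT-assigned component weights (summing to 1); here the
# spatial cue is weighted up because it discriminates these two relations.
weights = np.array([0.3, 0.3, 0.4])

# Fuse the cues: weighted sum of component similarities per relation.
fused = {rel: float(weights @ s) for rel, s in sims.items()}
best = max(fused, key=fused.get)
print(best, fused)
```

The predicted relation is the one with the highest fused score; in a real system the weights would come from the LLM's chain-of-thought output rather than being hard-coded.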
null
null
null
null
null
null
Stein $\Pi$-Importance Sampling
Accept (spotlight)
Summary: This paper presents a novel approach for constructing an MCMC target that is specifically designed for post-processing using Stein importance sampling and Stein thinning, where the goal is to assign optimal weights to a subset of sample particles in order to construct the best possible approximations of a distribution $P$. The proposed method introduces a new target distribution, denoted as $\Pi$, which is obtained by tilting the original target density $p(x)$ with the square root of a Stein kernel $k_P(x)$. This construction is derived by solving a variational problem that minimises the trace of the variance of a limiting Gaussian distribution. The use of $\Pi$ as a target for the Metropolis-Adjusted Langevin Algorithm (MALA) instead of the original target distribution $P$ is justified through an almost sure consistency guarantee (Theorem 1) and numerical experiments conducted on a set of benchmark problems. Strengths: **Originality**: Whilst previous works have extensively studied post-processing using Stein's discrepancy, this paper introduces a novel perspective on improving Stein importance sampling through the design of a target distribution that is distinct from the original target $P$ but more suitable for post-processing techniques. This novel approach of leveraging the target design to improve post-processing methods is both interesting and original. **Quality**: The construction of the proposed target distribution $\Pi$ is clearly explained (Section 3.1). The paper presents both a theoretical guarantee (Theorem 1) and extensive numerical evidence to support the choice of $\Pi$. The assumptions in Theorem 1 appear to be mild, and the authors provide comprehensive discussions comparing them with similar conditions in related literature. Overall, the significance of the proposed method is convincingly demonstrated. 
**Clarity**: This paper exhibits a high level of clarity throughout, including an extensive review of related literature and methodologies (Section 2). Weaknesses: **Motivation**: While the authors have effectively justified the use of the proposed target $\Pi$ instead of the original target for constructing MALA samplers, the rationale for employing S$\Pi$IS over running MALA **without** post-processing is slightly weak. Specifically: 1. A main advantage of Stein importance sampling (SIS) lies in its ability to provide unbiased estimation when the MCMC sampler used to generate the sample is biased. Consequently, allocating computational resources to post-processing is crucial in such cases, as it is uncertain whether running the chain for a longer time would improve the quality due to the bias. However, for $\Pi$ sampling, one must set up an unbiased sampler (e.g., MALA) that targets $\Pi$, raising the question of whether conducting S$\Pi$IS is more beneficial than simply allocating the same computational budget to running the chain for an extended period. The reported experimental results do not seem to address this question, as they solely compare S$\Pi$IS with standard SIS. Including experiments comparing S$\Pi$IS with MALA without post-processing would be valuable in addressing this concern. 2. S$\Pi$IS requires the setup of an MCMC sampler targeting a distribution different from $P$. Thus, the generated sample must be used in conjunction with the post-processing step to perform inference on $P$. This contrasts with standard SIS, where samples can be drawn from a sampler targeting $P$, which is typically what practitioners would do regardless of their intention to use post-processing methods. This raises the question of whether setting up a sampler that targets such a specialised distribution and relies on post-processing is more practically attractive than directly targeting $P$. Including discussions addressing these concerns would be beneficial. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: L174: Could you elaborate on what is the relaxation and what one loses due to this relaxation, if any? Figure 1.2: It seems that for small $n$, targeting $P$ is more beneficial than targeting $\Pi$. Is it purely coincidental, or does it reflect a specific characteristic or trade-off associated with the methodology? Could you offer some insights into this observation? L229: Could you explain why the co-domain of $D_P$ is $\mathcal{X} \times \mathcal{X}$ instead of $\mathcal{P}(\mathcal{X})$? Table 1: Are the results averages over multiple repetitions or obtained from a single repetition? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations of Stein IS and the use of different Stein kernels are addressed in Section 4 and Section 3.2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful report. > A main advantage of Stein importance sampling (SIS) lies in its ability to provide unbiased estimation when the MCMC sampler used to generate the sample is biased. However, for $\Pi$ sampling, one must set up an unbiased sampler (e.g., MALA) that targets $\Pi$ This is a good point; for example, the unadjusted Langevin algorithm (ULA) can generate approximate samples at lower computational cost compared to MALA, and Stein Importance Sampling can be used to retrospectively correct for the bias in ULA. If it were clear how to design a ULA algorithm to target $\Pi$ then this could be directly applied in Stein $\Pi$ Importance Sampling, however in general the explicit characterisation of the invariant distribution for ULA is intractable, making it unclear how to proceed. This would be an interesting direction for further research and we will add a discussion on this point to the conclusion section of the revised manuscript. > Including experiments comparing S$\Pi$IS with MALA without post-processing would be valuable in addressing this concern. Please allow us to point out that the columns labelled "MALA" in Table 1 correspond to classical $P$-invariant MALA, which we believe is what you have asked for? It is true that, when $P$ and its gradients are cheap to evaluate, the computational cost of MALA is lower than that of S$\Pi$IS-MALA, and one could run more iterations of MALA for an equivalent computational cost. But for more complex $P$ the computational cost of all algorithms will be gated by the number of times $P$ and its gradients need to be evaluated, making the direct comparison in Table 1 meaningful. Further, if we aim for a compressed representation of $P$, then some form of post-processing of MALA would be required, which would then entail an additional computational cost. We aim to discuss these important points in more detail in the revised manuscript. 
> This raises the question of whether setting up a sampler that targets such a specialised distribution and relies on post-processing is more practically attractive than directly targeting $P$. Including discussions addressing these concerns would be beneficial. As we also mentioned to Reviewer X2c9, at present it is unclear whether these algorithms will stand the test of time compared to MCMC, but we believe they are certainly worth investigating. We will expand the conclusion section of the manuscript to address this broader point. > L174: Could you elaborate on what is the relaxation and what one loses due to this relaxation, if any? Actually, nothing was lost due to relaxing the constraints (S1-2), since we verified in lines 177-178 that the solution to the relaxed problem also happens to satisfy the constraints (S1-2). > Figure 1.2: It seems that for small $n$, targeting $P$ is more beneficial than targeting $\Pi$. Is it purely coincidental, or does it reflect a specific characteristic or trade-off associated with the methodology? Could you offer some insights into this observation? In this particular example $P$ is uni-modal while $\Pi$ is multi-modal, and we conjecture that there is a "warm up" period where samples are needed in each of the "modes" of $\Pi$ before S$\Pi$IS starts to perform well. In contrast, SIS requires only samples from $P$, which is uni-modal, leading to a shorter "warm up" period. > L229: Could you explain why the co-domain of $D_P$ is $\mathcal{X} \times \mathcal{X}$ instead of $\mathcal{P}(\mathcal{X})$? Thank you for catching this typo! Indeed, $D_P : \mathcal{P}(\mathcal{X}) \rightarrow [0,\infty]$. > Table 1: Are the results averages over multiple repetitions or obtained from a single repetition? They are averages over ten replicates; in the caption we wrote that "ten replicates were computed", but we will make explicit that averages over the ten replicates are being reported. 
--- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, which answered all of my questions.
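Illustrative sketch (not part of the original review or rebuttal): the construction discussed above tilts the target density by the square root of the Stein kernel diagonal, $\pi(x) \propto p(x)\,k_P(x,x)^{1/2}$. Assuming $P = N(0,1)$ and a Langevin-Stein kernel built from an IMQ base kernel $k(x,y) = (1+(x-y)^2)^{-1/2}$, the diagonal works out to $k_P(x,x) = 1 + x^2$ (since the score is $s(x) = -x$), and a simple quadrature confirms that $\Pi$ is over-dispersed relative to $P$:

```python
import numpy as np

def log_p(x):
    # standard Gaussian target, unnormalised
    return -0.5 * x**2

def stein_kernel_diag(x):
    # Diagonal k_P(x, x) of the Langevin-Stein kernel with IMQ base kernel
    # k(x, y) = (1 + (x - y)^2)^(-1/2): the cross-derivative term equals 1
    # at x = y, the first-derivative terms vanish, and the remaining term is
    # s(x)^2 * k(x, x) = x^2, giving k_P(x, x) = 1 + x^2.
    return 1.0 + x**2

def log_pi(x):
    # tilted target: pi(x) proportional to p(x) * k_P(x, x)^(1/2)
    return log_p(x) + 0.5 * np.log(stein_kernel_diag(x))

# Compare variances by quadrature on a uniform grid.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
p = np.exp(log_p(xs)); p /= p.sum() * dx
pi = np.exp(log_pi(xs)); pi /= pi.sum() * dx
var_p = (xs**2 * p).sum() * dx
var_pi = (xs**2 * pi).sum() * dx
print(var_p, var_pi)  # pi has noticeably larger variance than p
```

This over-dispersion is exactly the property illustrated in Figure 1 of the paper: the tilted target places more mass in the tails, where a few well-placed, up-weighted samples suffice.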
Summary: The paper studies the design of an MCMC algorithm that is well suited for post-processing, so as to obtain a consistent approximation $P_n^\star$ of the target measure $P$ under the Stein kernel discrepancy $D_P(\cdot)$. The authors suggest the following novel procedure: (1) choose a measure $\Pi$ that differs from the target measure $P$ by the factor $\sqrt{k_P}$, where $k_P$ is the Stein kernel (by solving a variational problem); (2) sample from $\Pi$ using (pre-conditioned) MALA; (3) solve a linearly-constrained quadratic problem, or construct a sparse approximation, to obtain $P_n^\star$. Theorem 1 provides assumptions that ensure convergence of the Stein kernel discrepancy $D_P(P_n^\star)$ to zero. Step (1) is based on the SNIS procedure and the choice of $\Pi$ that gives the smallest variance of the Stein kernel discrepancy between the SNIS estimate $P_n$ and the target measure $P$. The results are illustrated by numerical experiments. Strengths: - Novel variational algorithm for explicit construction of the measure $\Pi$. - Theoretical analysis of the algorithm in the case of using MALA as the MCMC sampler. Weaknesses: - Only asymptotic convergence in the $D_P$ metric is proved in Theorem 1. - It would be good to sketch the ideas behind the construction of the Stein kernel in the main text. - It could be difficult to calculate the Stein kernel in more practical problems. - Works only in moderate dimensions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It is well known that the MALA sampler is not good for mixtures of distributions. Is it possible to replace MALA by some other MCMC sampler, such as HMC or adaptive MCMC? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
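Illustrative sketch (not part of the original review or rebuttal): step (3) in the summary above — the linearly-constrained quadratic programme that finds the optimal Stein importance sampling weights, minimising the squared KSD $w^\top K w$ over the probability simplex — can be sketched as follows. This is a stand-in, not the authors' implementation (which uses qpsolvers/ProxSuite); it assumes a 1D Gaussian target with score $s(x) = -x$, an IMQ base kernel, and SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

def stein_gram(x, score):
    # Langevin-Stein kernel Gram matrix for the IMQ base kernel
    # k(x, y) = (1 + (x - y)^2)^(-1/2):
    # k_P(x, y) = d_x d_y k + s(x) d_y k + s(y) d_x k + s(x) s(y) k
    u = x[:, None] - x[None, :]
    q = 1.0 + u**2
    k = q**-0.5
    dxk = -u * q**-1.5
    dyk = -dxk
    dxdyk = q**-1.5 - 3.0 * u**2 * q**-2.5
    s = score(x)
    return dxdyk + s[:, None] * dyk + s[None, :] * dxk + np.outer(s, s) * k

rng = np.random.default_rng(0)
x = rng.normal(size=50)                # stand-in for MCMC output
K = stein_gram(x, score=lambda x: -x)  # score of N(0, 1)

# Minimise w^T K w subject to w >= 0 and sum(w) = 1.
n = len(x)
res = minimize(
    lambda w: w @ K @ w,
    np.full(n, 1.0 / n),
    jac=lambda w: 2.0 * K @ w,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
w = res.x
ksd2_opt = w @ K @ w
ksd2_unif = np.full(n, 1.0 / n) @ K @ np.full(n, 1.0 / n)
print(ksd2_opt, ksd2_unif)  # optimal weights give smaller squared KSD
```

Because the uniform weights are a feasible point, the optimised weights can only improve (lower) the squared KSD; the $O(n^3)$-ish cost of this step is the bottleneck the reviewer asks about.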
Rebuttal 1: Rebuttal: Thank you for carefully considering our manuscript. > It is well known that MALA sampler is not good for mixtures of distributions. Is it possible to replace MALA by some other MCMC sampler as HMC or adaptive MCMC? This can of course be done in practice, but ensuring consistency of the resulting algorithm could be considerably more difficult. It is certainly a good suggestion, and one that we will take forward in further work. --- Rebuttal Comment 1.1: Comment: Thanks a lot! I will keep my score.
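Illustrative sketch (not part of the original review or rebuttal): the exchange above concerns which MCMC sampler targets $\Pi$. A minimal, un-preconditioned 1D MALA targeting the tilted density is sketched below — an illustration only, not the paper's implementation. It assumes $P = N(0,1)$ and the diagonal $k_P(x,x) = 1 + x^2$ that arises from an IMQ base kernel, so $\log \pi(x) = -x^2/2 + \tfrac{1}{2}\log(1+x^2)$ up to a constant:

```python
import numpy as np

def log_pi(x):
    # tilted target log-density (up to a constant): P = N(0, 1) with
    # Langevin-Stein kernel diagonal k_P(x, x) = 1 + x^2
    return -0.5 * x**2 + 0.5 * np.log(1.0 + x**2)

def grad_log_pi(x):
    return -x + x / (1.0 + x**2)

def mala(n_steps, step, rng):
    # standard Metropolis-adjusted Langevin algorithm in one dimension
    x = 0.0
    out = np.empty(n_steps)
    for t in range(n_steps):
        mean_fwd = x + 0.5 * step**2 * grad_log_pi(x)
        prop = mean_fwd + step * rng.standard_normal()
        mean_bwd = prop + 0.5 * step**2 * grad_log_pi(prop)
        # log acceptance ratio: pi(prop) q(x | prop) / (pi(x) q(prop | x))
        log_alpha = (log_pi(prop) - log_pi(x)
                     - (x - mean_bwd)**2 / (2.0 * step**2)
                     + (prop - mean_fwd)**2 / (2.0 * step**2))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        out[t] = x
    return out

rng = np.random.default_rng(1)
chain = mala(50_000, step=1.0, rng=rng)
print(chain.var())  # noticeably larger than Var_P = 1: pi is over-dispersed
```

Replacing this kernel with HMC or an adaptive scheme, as the reviewer suggests, only changes the `mala` routine; the consistency analysis is what becomes harder.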
Summary: This paper proposes a proposal distribution $\Pi$ to generate finite points such that a weighted version approximates the target distribution $P$ under Stein discrepancy. Strengths: The main strength lies in a new proposal for the sampling distribution $\Pi$ that is more efficient for follow-up approximation to the original distribution $P$ in the sense of Stein discrepancy. Asymptotic consistency is established, and extensive simulations on Bayesian computation are also conducted to illustrate the benefit of the proposal. Weaknesses: The authors might consider expanding the section on the actual contribution (the proposal of $\Pi$), and be more brief on the background. For instance, the section on Wasserstein distance doesn't seem necessary. The second limitation is in the theoretical guarantees. Can one provide a non-asymptotic convergence guarantee for the proposed method? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can the authors comment more on the effect of dimensionality on the improvement of $\Pi$ over $P$? 2. Can the same proposal work for other kernel functions (non-Stein)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments on our manuscript. > the section on Wasserstein distance doesn't seem necessary Thank you, we will think carefully about how to improve all aspects of our presentation, bearing in mind that Reviewers fuoL and pfRv appreciated this part of the manuscript specifically ("the motivational section on the optimal quantization of Wasserstein distance is well-put", "a high level of clarity throughout, including an extensive review of related literature and methodologies"). > Can one provide a non-asymptotic convergence guarantee for the proposed method? We believe this is possible in principle by exploiting Theorem 2 of Riabiz et al (2022). However, this strategy would only provide a bound on $\mathbb{E}[\text{KSD}^2]$. In contrast, our manuscript contains an almost sure (albeit asymptotic) convergence result. --- Rebuttal Comment 1.1: Comment: Thanks for the response. However the questions under Question section have not been addressed. --- Reply to Comment 1.1.1: Comment: Thank you, we apologise for our oversight: > 1. Can the authors comment more on the effect of dimensionality on the improvement of $\Pi$ over $P$? Appendix D.1 is dedicated to this point: In brief, a naive kernel choice can make the diagonal $k_P(x,x)$ of the Stein kernel effectively a constant, in which case $\Pi$ becomes effectively identical to $P$ (see the difference between the Langevin and KGM kernels in Figure S1). This is not necessarily a problematic result, as it was already known that kernel choice is important for high-dimensional applications of KSD (e.g. see the discussion and guidance on kernel choice in Schrab et al, 2022). Some alternatives to KSD, such as Sliced KSD (Gong et al, ICLR 2021), have been proposed specifically for the high-dimensional context. 
Though these alternatives do not currently enjoy the same convergence control guarantees as KSD, it could be interesting to seek an optimal choice of $\Pi$ for these discrepancies as well. We will highlight this as a possible future research direction in the conclusion of the manuscript. > 2. Can the same proposal work for other kernel functions (non Stein)? The argument that we made for selection of $\Pi$ is not specific to a Stein kernel. You may be wondering whether such an approach could be useful for Bayesian quadrature, for example? The trouble here is that Bayesian quadrature is usually performed with a translation-invariant kernel, and for any translation-invariant kernel our $\Pi$ becomes equal to $P$. In recent work, some authors have advocated for the use of non-stationary kernels as a default in Bayesian quadrature (Fisher et al, 2020). Evaluating the potential benefit of sampling from $\Pi$ in the latter context could be an interesting avenue for further work, and we will also highlight this in the revised manuscript. We hope that these adequately address your questions, and thank you again for your report. References: Fisher MA, Oates CJ, Powell C, Teckentrup A., 2020. A Locally Adaptive Bayesian Cubature Method. International Conference on Artificial Intelligence and Statistics (AISTATS 2020). Gong, W., Li, Y. and Hernández-Lobato, J.M., 2021. Sliced Kernelized Stein Discrepancy. In International Conference on Learning Representations (ICLR 2021). Schrab, A., Guedj, B. and Gretton, A., 2022. KSD aggregated goodness-of-fit test. Advances in Neural Information Processing Systems, 35, pp.32624-32638.
Summary: The paper analyses which target distribution to use for MCMC in the situation where a Stein discrepancy will be used to post-process its output samples. They propose to use a different target distribution for the MCMC from the distribution being approximated, and show this improves performance on a variety of posterior inference problems. Strengths: - The paper identifies a clear question: how to choose the invariant distribution $\pi$ for MCMC if using a Stein discrepancy to post-process the samples. - They provide a method for selecting $\pi$ via a variational problem framing, where $\pi$ is selected to minimize the variance in the post-processed approximation. This gives a closed-form expression for $\pi$ (up to a normalizing constant) such that it can easily be used within MCMC. Their analysis is agnostic to the choice of Stein kernel, making their results broadly applicable. - Figure 1 nicely illustrates the property of over-dispersion that their choice of $\pi$ has, and shows on a simple 1D problem that this results in lower error bars compared to using $P$ for the MCMC. - In their experiments their proposed method is shown to consistently improve (better results in 70% of PosteriorDB tasks) upon the baseline of setting $\pi$ to the target distribution being approximated. - Their presentation is generally very clear, with informative figures provided. Weaknesses: I could not see any weaknesses in this paper; however, its subfield is not within my area of expertise and I did not check the proofs. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How does using the Stein discrepancy **during** sample generation (e.g. Stein variational gradient descent) compare to using it for post-processing? I note that this question is not relevant for the contributions of the paper, but an answer would be useful for my understanding of the paper's general usefulness. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors address the limitations of their method in the discussion, namely that it requires second order derivatives of the model Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind comments on our manuscript. > How does using the stein-discrepancy during sample generation (e.g. stein variational gradient descent) compare to using it for post-processing? There have been some attempts to directly address this issue, in particular Stein Points (Chen et al, ICML 2018), Stein Point MCMC (Chen et al, ICML 2019), and Kernel Stein Discrepancy Descent (Korba et al, ICML 2021). Whilst these algorithms do make use of Stein discrepancy for guiding sampling, it has to be acknowledged that these algorithms are not widely used. SVGD is more widely used, but it is a gradient flow on the KL divergence rather than on the KSD. At present it is unclear whether these algorithms will stand the test of time compared to MCMC, but we believe they are certainly worth investigating. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for pointing out this literature. I have no further questions and am happy to recommend acceptance of the paper.
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This work proposes a design of a probability density that is more over-dispersed than the target density, so that, somewhat surprisingly, the resulting MCMC samples, after being optimally reweighted, can achieve lower KSD than MCMC samples from the true target density. Consistency of the two proposed algorithms, SPiIS-MALA and SPiT-MALA, is proved. These two algorithms are benchmarked on the PosteriorDB dataset to demonstrate their superior performance over raw MALA and SIS-MALA (MALA plus optimal reweighting). Strengths: - The paper tackles an interesting yet, to my knowledge, underexplored problem in sampling, which is how to design a density so that the resulting samples provide a good discrete support for the ensuing optimal reweighting step that finds weights to minimize KSD. - The angle of attack, although not new (e.g. Graf and Luschgy 2007), is quite surprising (i.e. the density needs to be over-dispersed). The motivational section on the optimal quantization of Wasserstein distance is well-put. - The consistency of the two proposed methods is proved, which is nice. - Many kernels are studied (Langevin-Stein/KGM/Riemann-Langevin-Stein) and are used in producing the empirical results. - The writing of the paper is excellent and lots of intuition and toy examples are given to illustrate the points. Weaknesses: - The consistency proof of Theorem 1 seems like a rather straightforward application of Theorem 2 (Durmus and Moulines 2022) and Theorem 3 (Riabiz et al. 2022). In particular, it seems to me the same proof should go through for quite generic $\Pi$, not necessarily the one that takes the form in (8). Hence, it is not clear whether the design (8) is theoretically justified, other than the heuristic argument given in Sec. 3.1. - There is a considerable gap between the heuristic argument in Sec. 3.1 and the proposed algorithm, namely that the weights used in Sec. 3.1 are not optimal, whereas those in the algorithms are. 
The authors claimed the choice $dP/d\Pi$ is "near-optimal" (without justification), but then noted that using $dP/d\Pi$ will perform substantially worse than the optimal weight, which seems contradictory to the first claim. - The experimental results comparing SIS-MALA and SPiIS-MALA are somewhat mixed. For an end user, there is no provided criterion on whether they should use the proposed method or the baseline SIS-MALA. Moreover, I cannot find standard deviations of the reported numbers. * There seem to be missing experiments that benchmark the performance of SPiT-MALA compared to a baseline (e.g. $\Pi=P$). The only experiment I found for SPiT-MALA is in D.6 where the consistency is verified. - In many places it is hinted that $P$ will be close to $\Pi$ as the dimension $d$ increases. This implies that the proposed method is only applicable to small dimensions and thus the application value is limited. Further analysis on the relation to dimension could be helpful. Moreover, there is only one experiment for $d=66$ (last row in Table 1) that corroborates the point that the extent of improvement decreases when the dimension increases; more experiments could be used to strengthen this point. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I would like to see a rigorous statement and proof of the heuristic argument from Sec 3.1. The sketched-out argument makes sense (other than one detail --- see below) so I'm wondering why it is not made into a complete proof. * In the paragraph below (S2), it seems to assume that $E_{x \sim \Pi}[\frac{dP}{d\Pi}(x)(k(\cdot, x) - \mu_P)] = 0$. Why is this true? I think $E_{x \sim \Pi}[\frac{dP}{d\Pi}(x)k(\cdot, x) - \mu_P] = 0$ but not when $\mu_P$ is multiplied by the importance weight. Of course, if $\mu_P = 0$ then it does not matter. - A heuristic argument is given in D.1 for the Langevin-Stein kernel and $P$ is the standard $d$-dimensional Gaussian. 
Aside from this very simple $P$, is it true that $P \approx \Pi$ in general? Can we say anything theoretical about it? - How is the simplex-constrained minimization in Algorithm 2 implemented? What is the time complexity? This also seems like the computational bottleneck of Alg. 2 and the reason $n$ is only a few thousand in all experiments. - For the experiment done in D.8, if the plot is for Wasserstein-1 distance, will $\Pi(1Wass)$ result in better numbers than $\Pi(KGM3)$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed report. > it is not clear whether the design (8) is theoretically justified, other than the heuristic argument given in Sec. 3.1 Our proposed $\Pi$ is not the only choice for which consistency can be established; consistent approximation is possible also for $\Pi = P$, for example. Rather, we heuristically motivate a specific choice of $\Pi$ that is expected to out-perform alternatives, and then we verify that consistency occurs for this specific choice of $\Pi$. > The authors claimed the choice $dP/d\Pi$ is "near-optimal" (without justification), but then noted that using $dP/d\Pi$ will perform substantially worse than the optimal weight You are also correct that there is a performance gap between the weights that we analyse in Sec 3.1 and the Stein importance sampling weights; we will replace the phrase "near optimal" with a more nuanced explanation, which acknowledges the performance gap but explains that nevertheless the weights that we analyse in Sec 3.1 are expected to perform much better than simpler choices, such as uniform weights. > For an end user, there is no provided criterion on whether they should use the proposed method or the baseline SIS-MALA. We see this as a fundamentally difficult question, akin to asking how to pick a MCMC method. In the MCMC setting, it is typical to try one algorithm and, if it is not performing well, to try a different algorithm instead. On the other hand, if we are talking in theoretical terms, then low-to-moderate dimensional posteriors that are not highly multi-modal are likely to be well-suited to S$\Pi$IS-MALA, but otherwise plain MALA (rather than SIS-MALA) is likely to perform best. We emphasise that this is due to the fundamental pathologies of KSD itself, rather than our algorithms to minimise it. A discussion will be added to the manuscript. > I cannot find standard derivation of the reported numbers. 
The full results, including standard error bars, are contained in supplemental Appendix D.5 - space considerations prevented us from including these in Table 1. It can be verified that the error bars are all relatively small. > There seem to be missing experiments that benchmark the performance of SPiT-MALA compared to a baseline (e.g. $\Pi=P$). Thank you for the opportunity to discuss this point - we are confident that S$\Pi$T-MALA can outperform the baseline Stein Thinning algorithm with $\Pi = P$, but this will occur only at large sample sizes $n$. The reason for this is that Stein Thinning is a greedy algorithm which favours the inclusion of high probability samples in its initial phase - once the modes have been well described it will only then move on to sampling from the tail. We wanted to demonstrate this phenomenon, but it appears that in many cases we need $n > 3,000$ to see the behaviour just described. Due to the super-linear cost of S$\Pi$T-MALA, we have so far not been able to deploy sufficient computational resources to thoroughly examine this effect. This, together with the development of more efficient approximations to S$\Pi$T-MALA, are active areas of ongoing work. As such, we focused most of the manuscript on S$\Pi$IS-MALA, only noting in passing that we also obtain a consistency proof for S$\Pi$T-MALA. > There is only one experiment for $d=66$ that corroborates the point that the extent of improvement decreases when the dimension increases; more experiments could be used to strengthen this point. This is a good suggestion; at the time of writing this was the highest-dimensional model in PosteriorDB that complied, but we can explore the inclusion of other high-dimensional examples in the revised manuscript. > I would like to see a rigorous statement and proof of the heuristic argument from Sec 3.1 With respect, we believe everything here is rigorously stated and proven, albeit not within a theorem environment in latex. 
> In the paragraph below (S2) it seems to assume that $\mathbb{E}_{x \sim \Pi} [ \frac{\mathrm{d}P}{\mathrm{d}\Pi}(x) ( k(\cdot,x) - \mu_P(\cdot) ) ] = 0$. Why is this true? The equality holds due to the following argument: $\mathbb{E}_{x \sim \Pi} [ \frac{\mathrm{d}P}{\mathrm{d}\Pi}(x) ( k(\cdot,x) - \mu_P(\cdot) ) ]$ $=\mathbb{E}_{x \sim P} [ k(\cdot,x) - \mu_P(\cdot) ]$ $=\mathbb{E}_{x \sim P} [ k(\cdot,x) ] - \mu_P(\cdot) = \mu_P(\cdot) - \mu_P(\cdot) = 0$ > Is it true that $P \approx \Pi$ in general? Since $\frac{\mathrm{d}\Pi}{\mathrm{d}P}(x) = k_P(x,x)^{1/2}$, the difference between $P$ and $\Pi$ is driven by the Stein kernel $k_P$. While $k_P(x,x)$ is usually an unbounded function as $\|x\| \rightarrow \infty$, the tail behaviour is controlled by the choice of Stein operator and base kernel. There are of course moment-type constraints on $\Pi$, required for consistency of S$\Pi$IS-MALA, which mean that it cannot differ arbitrarily from $P$ if our theory is to hold. > How is the simplex-constrained minimization in Algorithm 2 implemented? What is the time complexity? This is a linearly-constrained quadratic programme which we solved in Python 3.10.4 using the qpsolvers package version 3.4.0 as the frontend in conjunction with the ProxSuite package version 0.3.6 serving as the backend. The full details are contained in the accompanying code, and we will add an explicit mention of the packages that can be used to run Algorithm 2 into the main text. The time complexity is difficult to quantify, but we believe it is upper-bounded by $O(n^3)$. These details will be included in the revised manuscript. > For the experiment done in D.8, if the plot is for Wasserstein-1 distance, will $\Pi$ (1-Wass) result in better numbers than $\Pi$ (KGM3)? We can certainly investigate -- but Wasserstein-1 and KSD (KGM3) are quite different performance metrics, the former not capturing convergence of second and third moments, unlike the latter.
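For readers without the qpsolvers/ProxSuite stack, the same linearly-constrained QP can be prototyped with SciPy alone; the following is a minimal sketch of the generic problem $\min_w w^\top K w$ over the probability simplex (our illustration, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

def simplex_qp(K):
    """Minimise w^T K w subject to w >= 0 and sum(w) = 1."""
    n = K.shape[0]
    res = minimize(
        fun=lambda w: w @ K @ w,
        x0=np.full(n, 1.0 / n),          # start from uniform weights
        jac=lambda w: 2.0 * (K @ w),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
K = A @ A.T + 6.0 * np.eye(6)            # a well-conditioned PSD matrix
w = simplex_qp(K)
```

Dedicated QP backends such as ProxQP factorise an $n \times n$ KKT system at each iteration, which is consistent with the $O(n^3)$ upper bound suggested above.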
Our aim in this work was limited to designing an algorithm that is able to minimise a user-specified KSD. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for the detailed response that has clarified all of my questions. I would like to keep my current score due to a few areas that can still be improved: 1. S$\Pi$T-MALA lacks empirical verification (or at least a discussion of the challenge of empirical verification as you mentioned in the rebuttal); 2. More empirical or theoretical justification for the hypothesis that the benefit of S$\Pi$IS-MALA vanishes as the dimension increases; 3. Important details should be added in the main text (or added in the appendix and referred to in the main text), such as the standard deviation in Table 1, the implementation of Algorithm 2 and its complexity (and whether the QP solver is exact or approximate; if the latter, what is the error); 4. Turn the "heuristic" argument in Sec 3.1 into a rigorous statement and perhaps add more math background on the Hilbert-space CLT result used, how to compute the trace of $\mathcal{C}$, etc.
null
null
null
null
null
null
Outlier-Robust Wasserstein DRO
Accept (poster)
Summary: This paper empowers WDRO with the ability to resist outliers, building upon the outlier-robust Wasserstein distance $W_p^\epsilon$. The excess risk of the solution to both the outlier-robust WDRO and its empirical version is given. An improved bound on the excess risk is derived for the setting of low-dimensional features. The optimization algorithm for the outlier-robust WDRO is developed with the dual form of the minimax problem. An empirical study validates the effectiveness of the proposed method in a simple regression setting involving both Wasserstein and total variation contamination. Strengths: 1. DRO is known to be over-pessimistic in the presence of outliers. Empowering Distributionally Robust Optimization with robustness against noisy labels is important. The outlier-robust Wasserstein distance is a well-developed revision of the original Wasserstein distance to account for a certain degree of contamination. Thus, constructing the uncertainty set of DRO with $W_p^\epsilon$ is both significant and reasonable. 2. The theoretical analysis of the excess risk of the proposed method is sound and comprehensive. The results quantify how the solution adapts to given geometric and TV contamination. The effectiveness of the empirical algorithm is also guaranteed by the empirical bound on the excess risk. Weaknesses: 1. The effectiveness of outlier-robust WDRO is theoretically guaranteed by the excess risk bound. My main concern is over the **advantage** of outlier-robust WDRO vs. WDRO. As is stated by the authors, the excess risk of outlier-robust WDRO is upper bounded by the $W_p$ regularizer with a larger Wasserstein radius. The authors also recognize that the same upper bound could be achieved by a radius-expanded WDRO. The two advantages of outlier-robust WDRO given in Remark 1 are not convincing. - The authors claim that a heavy preprocessing step is required by WDRO to estimate the exact radius.
However, in practical settings beyond simulated experiments, the exact parameters of the contamination levels $\rho, \epsilon$ are also latent. Either an estimate or tuning of the radius is necessary for both WDRO and the outlier-robust version. - The authors prove an improved bound on the excess risk for outlier-robust WDRO in the setting of low-dimensional features and claim that the bound for WDRO could not be improved, which is unsupported. A tight bound on the excess risk of WDRO with low-dimensional features might be given to consolidate the authors' claim. Otherwise, an inequality between the excess risk of WDRO and its outlier-robust version might be given similarly to Eq.4. Furthermore, the proposed low-dimensional feature setting is somewhat impractical because the exact dimension $k$ is typically unknown in real datasets. In the experiment section, I suppose the radius for WDRO is selected to be the true contamination radius. A curve of the performance of WDRO and the outlier-robust version with increasing radius might demonstrate their gap more convincingly. 2. I am concerned whether the formulation of outlier-robust WDRO (eq.3) is well defined. Consider a simple case where all the samples of $\tilde \mu$ are discretely distributed on $2/\epsilon$ points, one of which is denoted by $(x_0,y_0)$. By arbitrarily modifying $y_0$ to $y'$ we get a new distribution $\nu(y')$. According to Line 131 we have $W_p^\epsilon(\tilde \mu, \nu) =0$. Thus, all the $\nu(y')$ are included in the uncertainty set of the corresponding $W_p^\epsilon$ DRO. However, the risk of a given predictor on the family of $\nu(y')$ could be unbounded since $y'$ is unconstrained. As a consequence, there might be no solution to eq.3. Intuitively, though eq.3 incorporates the clean distribution into the uncertainty set, it might also include more dirty distributions since the outlier-robust Wasserstein distance tolerates a small fraction of outliers, but the risk on these outliers could be unbounded.
Therefore, I'm concerned that the proposed outlier-robust DRO might be biased towards dirtier distributions instead of recovering the clean one. 3. I am also concerned with the selection of the radius $\rho$ of outlier-robust WDRO. As indicated by Theorem 1, larger $\rho$ leads to higher excess risk, while Line 203 states that the excess risk bound only stands for $\rho \geq \rho_0 + W_p(\mu, \hat \mu_n)$, implying that $\rho$ shall not be too small. Since both $\rho_0$ and $W_p(\mu, \hat \mu_n)$ are unavailable in practical settings, the selection of the parameter seems tricky. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why doesn't the dual form of outlier-robust WDRO in Proposition 7 reduce to that of WDRO when $\epsilon \rightarrow 0$, even if neglecting $\lambda_1$? Specifically, the operator $[\cdot]_+$ claimed to be vital for outlier-robust WDRO does not disappear when outlier-robust WDRO reduces to WDRO. Some typos: - Line 158: $\nu$ v.s. $\mu$ - Supplementary Line 565: In the second equality, it might be $s_i\geq r_i$ and $s_i \geq 0$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: **Comparison to WDRO with expanded radius:** We agree that this warrants further discussion. Please see **Common Response 2**. **Tight performance bound for WDRO with expanded radius with low-dimensional features:** As discussed in **Common Response 2**, the bounds $\mathsf{W}\_1(\hat{\mu},\mu) \lesssim \rho + \sqrt{d\ep}$ and $\mathsf{W}\_1(\check{\mu},\mu) \lesssim \sqrt{d}\rho + \sqrt{d\ep}$, which guarantee that the Wasserstein ball contains the true distribution, are tight in general. Moreover, we can show that the performance of standard WDRO with any arbitrary radius $\rho'$ is tightly characterized by the Wasserstein regularizer from [11] with that radius, even if the reference loss function depends only on $k$-dimensional features. Combining the above claims, if we perform standard WDRO centered around either $\hat{\mu}$ or $\check{\mu}$ and choose the radius $\rho'\in\\{\rho + \sqrt{d\ep},\sqrt{d}\rho + \sqrt{d\ep}\\}$ as above, we cannot recover the excess risk bound of Theorem 3 unless $k = \Omega(d)$. Further, we are currently working to prove a stronger lower bound that will establish a clear separation between our approach and standard WDRO with expanded radius. Namely, we seek to show that any choice of $\rho'$ around these centers (even if the corresponding Wasserstein ball does not contain $\mu$) will incur suboptimal excess risk. We will report any findings in that direction in the final version. **Parameter selection:** We agree that this is an important practical concern. Please see **Common Response 1**. **Impracticality of low-dimensional features:** Section 4 applies to the setting where the loss functions depend only on $k$-dimensional features. Importantly, this is a property of the loss family $\mathcal{L}$, not the data. 
Although one may seek to match this dimension with an unknown latent dimension of the data, in practice $k$ is often fixed due to tractability or interpretability concerns (e.g., with linear regression, we have $k=1$). In that sense, the parameter $k$ can be viewed as known to the learner. **Experiments with varied radius:** Thank you for this suggestion. We will add experiments to the final version which vary $\rho$ as well as $\ep$. In the worst case, selecting $\rho$ too small can lead to unbounded risk (see **Common Response 1**), though this may not occur in our simple regression environment. **Well-definedness of outlier-robust WDRO:** In the proposed example, so long as $\mathcal{A}$ is encoding meaningful moment bounds, $\nu(y')$ does not belong in the uncertainty set for arbitrary $y'$. For example, if $\mathcal{A} = \mathcal{G}\_\mathrm{cov}$, observe that the variance level $\\|\Sigma\_{\nu(y')}\\|\_{\mathrm{op}} \to \infty$ as $\\|y' - y\_0\\| \to \infty$. That is, we only have $\nu(y') \in \mathcal{G}_\mathrm{cov}$ for appropriately small perturbations. Incorporating distributional knowledge of the clean data is essential to obtain meaningful risk bounds (not only in our setting, but throughout robust statistics). **Radius selection:** Our analysis indeed relies upon $\rho$ being taken sufficiently large. By the argument described in **Parameter selection** above, knowledge of $\rho\_0$ is necessary to obtain meaningful risk bounds. As discussed in Proposition 4, $\mathsf{W}\_p(\mu,\hat{\mu}\_n)$ is bounded by $O(\sqrt{d}n^{-1/d})$ with high probability if $\mu \in \mathcal{G}\_\mathrm{cov}$, so no additional knowledge is necessary. **Recovering classic WDRO when $\ep = 0$:** As noted by Reviewer zQim, our stated dual form was actually missing an extra Lagrange multiplier (corresponding to the unit mass constraint for probability distributions). 
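The covariance argument above is easy to check numerically. A small sketch (our illustration): take the reviewer's construction with $\epsilon = 0.2$, i.e. $2/\epsilon = 10$ equally weighted support points, and push one point away:

```python
import numpy as np

def cov_opnorm(points, weights):
    """Operator norm of the covariance matrix of a discrete distribution."""
    mean = weights @ points
    centred = points - mean
    sigma = (centred * weights[:, None]).T @ centred
    return np.linalg.norm(sigma, ord=2)

eps = 0.2
pts = np.random.default_rng(1).normal(size=(10, 2))  # 2/eps support points
w = np.full(10, 1.0 / 10)
norms = []
for shift in [1.0, 30.0, 300.0]:
    perturbed = pts.copy()
    perturbed[0] += np.array([shift, 0.0])           # move y0 to y'
    norms.append(cov_opnorm(perturbed, w))
# The operator norm grows like (eps/2)(1 - eps/2) * shift**2, so nu(y')
# exits any bounded-covariance class G_cov for large enough shifts.
```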
The corrected form is provided under **Unit mass constraint** in our response for that reviewer, and it indeed recovers the classic WDRO dual when $\ep = 0$ and $\sigma \to \infty$. **Typos:** Thanks for noticing these; we will fix them in the final version. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. My concerns over Weaknesses 2 and 3 and Question 1 have been addressed. Notably, the authors made a major mistake in the dual reformulation of DRO's objective and the empirical implementation. Fortunately, the authors have corrected both the theoretical and empirical results. My major concern over Weakness 1 around the advantage of outlier-robust WDRO over WDRO remains. The authors provide the excess risk of vanilla WDRO formulated as $\|\ell\| (\sqrt{d}\rho + \sqrt{d\epsilon})$, in contrast to that of outlier-robust WDRO formulated as $\|\ell\| (\rho + \sqrt{d\epsilon})$ in Corollary 1. However, the gap between the two DROs is $(\sqrt{d}-1)\rho$, which does not depend on the TV contamination level $\epsilon$. Why does outlier-robust WDRO outperform WDRO when $\epsilon=0$ and only geometric contamination exists? The result does not validate outlier-robust WDRO's superiority in the case of TV contamination. I believe the stronger bound on the excess risk for vanilla WDRO with any radius, which is ongoing work, might elucidate my concern. Since the paper is motivated by WDRO under TV contamination, I insist that a thorough comparison is important. I would like to keep my score for now until further results or clarification is made by the authors. --- Reply to Comment 1.1.1: Comment: When $\varepsilon = 0$, one may center standard WDRO around the observed distribution $\tilde{\mu}$ with radius $\rho$, since a Wasserstein ball of radius $\rho$ about $\tilde{\mu}$ will contain the true distribution $\mu$. In this case, the performance of outlier-robust WDRO and standard WDRO are the same.
However, as soon as $\varepsilon > 0$ (no matter how small), standard WDRO on its own is no longer sufficient, because we have no bound on $\mathsf{W}_p(\tilde{\mu},\mu)$ (indeed, $\mathsf{W}_p(\mu,(1-\varepsilon)\mu + \varepsilon \delta_z) \to \infty$ as $\|z\| \to \infty$). To remedy this, we proposed above to recenter standard WDRO around the efficiently computable estimate $\check{\mu}$ produced via iterative filtering. However, this estimate can only be guaranteed to satisfy $\mathsf{W}_p(\check{\mu},\mu) \leq \sqrt{d}\rho + \sqrt{d\varepsilon}$, hence the degraded risk bound. We note that this guarantee for $\check{\mu}$ is tight and novel, and that, in general, no approach based on standard WDRO existed for our problem before this work. We will update the reviewer if we can prove the mentioned stronger lower bound within the discussion period. In any case, we will add this filter + standard WDRO approach as a baseline for comparison to our experiments in the final version.
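The unboundedness of $\mathsf{W}_p(\mu,(1-\varepsilon)\mu + \varepsilon \delta_z)$ invoked above is easy to reproduce in one dimension with SciPy's exact $\mathsf{W}_1$ routine for weighted samples (a sketch, our illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance

eps = 0.05
dists = []
for z in [10.0, 100.0, 1000.0]:
    # W_1 between (1 - eps) * delta_0 + eps * delta_z and delta_0
    dists.append(wasserstein_distance(
        [0.0, z], [0.0], u_weights=[1.0 - eps, eps], v_weights=[1.0]))
# dists equals [eps * z for each z]: W_1 grows without bound in z even
# though the total-variation distance to delta_0 stays fixed at eps.
```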
Summary: It is well known that Wasserstein distances do not commute well with total variation distance - a slight perturbation in TV can change the Wasserstein distance by a lot. This means that models that are robust to corruptions in the data distribution in Wasserstein distance can still be vulnerable to outliers or corruptions in TV sense. This paper addresses this problem by considering robustness with respect to both TV and Wasserstein. The paper studies worst-case excess risk w.r.t. corruptions in a new distance termed the 'outlier robust Wasserstein distance'. The authors derive upper and lower bounds on the worst-case excess risk for families of distributions that are either sub-Gaussian or have bounded covariance. The upper bounds depend on a recently introduced term called the "resilience" of a probability measure. Intuitively, resilience of a measure quantifies the deviation in expectation of a function (in this case the loss function) when taken w.r.t. a measure in a probability ball around the original measure. The authors also propose a tractable reformulation of the min-max problem via strong duality. Finally, the authors tighten their results for the case when the true distribution lies on a low dimensional linear subspace, by extending their results to corruptions in outlier robust max-sliced Wasserstein distance. The authors also verify their excess risk bounds on a toy dataset for linear regression problem with mean absolute deviation loss. Strengths: - significance: the problem of incorporating outlier robustness into the framework of wasserstein distributionally robust optimization (WDRO) is of significance to both ML and optimization communities, and has already received some recent interest. This paper makes progress on this significant problem. - novelty: I think the upper and lower bounds on the min-max excess risk are novel. 
The strong duality result appears to be a generalization of Gao and Kleywegt's result on strong duality for WDRO, but I think it is non-trivial. Weaknesses: - **Contextualizing the work within related literature**: Although the paper is overall well written, I wish it did a better job at contextualizing its results properly by comparing them with the two special cases of WDRO without TV corruption and of TV robustness without WDRO. For WDRO without TV corruption, for example, a natural point of comparison is [this](https://optimization-online.org/wp-content/uploads/2016/04/5396.pdf) paper by Gao and Kleywegt. One point of comparison could be the existence and the form of the worst-case distribution in the min-max risk. This question is answered for WDRO without TV corruption in Gao and Kleywegt's paper, but this paper does not address it at all, which seems odd to me. For TV corruption without WDRO (aka Huber contamination), there are several works from the robust statistics community. - **Usefulness on top of stricter WDRO**: I think Remark 1 deserves much more discussion. From Proposition 1 it is clear that the new compound model of outlier robust WDRO can be subsumed under plain vanilla DRO by increasing the budget of the Wasserstein contamination in proportion to the resilience of the true distribution. Then, how much additional utility do we gain by analyzing the min-max excess risk under the compound model in detail? I must admit I don't understand what the authors mean by the expensive pre-processing step for WDRO. Please explain this to me satisfactorily, and I am willing to change my opinion on this particular weakness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Proposition 3: Is this lower bound valid for p = 2, or only for p = 1? The loss function family is only assumed to be Lipschitz and not $\alpha$-smooth, whereas for the upper bound, the loss is assumed to be $\alpha$-smooth for the case of p = 2.
- Proposition 4 could just be a remark. - Something is broken in reference [38]. Are you sure this is the correct reference? I did not find a Proposition 3.4 in it. Also, as I have stated previously, I would very much like a closer comparison of the strong duality result with that of Gao and Kleywegt. Do both the results become identical if one of the lambdas is zero? - Do the results of Section 4 still go through if the max-sliced Wasserstein distance is replaced with the sliced Wasserstein distance that takes expectation over all slices instead of max? - Lemma 5, which seems crucial to the proof of Theorem 1, is taken from [12], supposedly a forthcoming paper, which I am unable to access anywhere. So, I am unable to check Theorem 1 fully. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: **Contextualization within related work:** Thanks for raising this fair point. We will add further discussion that compares our results to those in the literature when either $\varepsilon = 0$ or $\rho = 0$. For WDRO without TV corruptions (i.e., when $\varepsilon=0$), we will add worst-case distribution results and show how both our strong dual and the said distribution reduce to those of standard WDRO as $\varepsilon\to 0$ and $\sigma\to\infty$. For the dual, after introducing a slight correction to our formulation (see **Unit mass constraint** in response to Reviewer zQim), we indeed recover the classic WDRO dual as a special case in the above limit. For the worst-case distribution, we will derive its existence and structure à la Gao and Kleywegt. Specifically, we can show under the setting of Theorem 2 that $$ \sup\_{\substack{\nu \in \cG\_2(\sigma,z\_0):\\\\ \RWp(\tilde{\mu}\_n\\|\nu) \leq \rho}} \E\_\nu[\ell] = \left\\{ \begin{array}{cll} \max & \sum\_{(i,j) \in [n]\times [J]} P\_{\ell\_j}(\xi\_{ij}, q\_{ij}) \\\\ \mathrm{s.t.} & q\_{ij} \in \R\_+, \xi\_{ij} \in \R^d & \forall i \in [n], \forall j \in [J] \\\\ & \xi\_{ij} \in q\_{ij} \cdot \mathcal Z & \forall i \in [n], \forall j \in [J] \\\\ & \sum\_{j \in [J]} q\_{ij} \leq \frac{1}{n(1-\varepsilon)} & \forall i \in [n] \\\\ & \sum\_{(i,j) \in [n]\times [J]} q\_{ij} = 1 \\\\ & \sum\_{(i,j) \in [n]\times [J]} P\_{\| \cdot\|^p} (\xi\_{ij} - q\_{ij} \tilde Z\_i , q\_{ij}) \leq \rho \\\\ & \sum\_{(i,j) \in [n]\times [J]} P\_{\| \cdot\|^2} (\xi\_{ij} - q\_{ij} z\_0 , q\_{ij}) \leq \sigma^2 \end{array} \right.
$$ The discrete distribution $\nu^\star = \sum\_{(i,j) \in \mathcal Q} q\_{ij}^\star \delta\_{ \xi\_{ij}^\star / q\_{ij}^\star}$ achieves the worst-case expectation on the left-hand side, where $(q\_{ij}^\star, \xi\_{ij}^\star)\_{(i,j) \in [n]\times [J]}$ are optimizers of the maximization problem on the right and $\mathcal Q := \\{(i,j) \in [n] \times [J] : q\_{ij}^\star > 0 \\}$. Note that we recover the classic worst-case distribution when $\ep = 0$ and $\sigma \to \infty$. Moreover, a non-constructive argument based on counting active constraints guarantees that some worst-case distribution exists with support size $n+2$. For the robust statistics setting without Wasserstein perturbations ($\rho = 0$), we will add a comparison to existing work on robust supervised learning (e.g., results of [47] for robust linear regression or those of [45] with a Wasserstein radius of 0). In general, our results improve upon existing risk bounds by scaling with the complexity of the optimal hypothesis for the clean data, rather than requiring a uniform complexity bound for the hypothesis class. **Usefulness on top of stricter WDRO:** We agree that this warrants further discussion. Please see **Common Response 2**. **Proposition 3:** The lower bound is valid as stated for any $p \geq 1$ (since the worst-case Wasserstein perturbation we construct is a translation). However, as the bound is in terms of the Lipschitz constant $L$, rather than the Sobolev norm and smoothness constant, it is best viewed as proving the tightness for the $p=1$ component of Theorem 1. **Proposition 4:** We will convert this statement into a remark. **Reference 38:** Thanks for pointing this out; some reference numbers were broken in the supplement. The corrected reference should be [36] ("On duality theory of conic linear programs," A. Shapiro). **Average vs.
max-sliced $\mathsf{W}\_p$ in Section 4:** An average-sliced version of $\mathsf{W}\_p$ will not suffice for the Section 4 results; our proof of Lemma 8 in Appendix D requires the Wasserstein bound to hold uniformly over all $k$-dimensional projections since we do not know the relevant map $M \in \R^{k \times d}$ in advance. **Lemma 5:** This result is from [12], which, while not yet officially published in *Operations Research*, is available on the INFORMS website (with DOI 10.1287/opre.2022.2383), and on arXiv with identifier 2009.04382. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I did not notice the missing constraint in your original Lagrangian formulation, but it is good that this error is fixed and we indeed recover the pure WDRO result as $\epsilon \to 0$. I also appreciate your response to my second weakness. Regarding my comment on your response to reviewer vYVj, could you also perhaps comment on what effect taking $p$ in $W_p$ to infinity has on your results? I am especially curious to see what happens to the strong duality result as $p\to\infty$. I understand this question may be non-trivial to answer, but my interest in this stems from the work of Pydi & Jog NeurIPS 2021, who showed a connection between $W_\infty$ perturbations and adversarial attacks.
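The max- vs average-sliced distinction above can be made concrete with a Monte-Carlo sketch for one-dimensional projections (our illustration; random unit directions stand in for the exact maximisation over projections):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_w1(X, Y, n_proj=500, seed=0):
    """Return (max-sliced, average-sliced) W_1 estimates over random
    1-D projections of two d-dimensional samples."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        u = rng.normal(size=X.shape[1])
        u /= np.linalg.norm(u)
        vals.append(wasserstein_distance(X @ u, Y @ u))
    return max(vals), float(np.mean(vals))

# A pure shift along the first coordinate: each projection u sees a
# translation by |u[0]|, so the max-sliced value approaches 1 while the
# average-sliced value stays near E|u[0]| = 2/pi.
X = np.random.default_rng(1).normal(size=(200, 2))
Y = X + np.array([1.0, 0.0])
ms, avg = sliced_w1(X, Y)
```

This illustrates why a bound holding uniformly over projections (the max) is needed when the relevant map $M$ is unknown: an adversarial direction can carry much more discrepancy than the average one.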
Summary: This paper introduces an outlier-robust Wasserstein Distributionally Robust Optimization (DRO) framework that aims to capture both geometric uncertainties and non-geometric perturbations, such as adversarial outliers. By utilizing the outlier-robust Wasserstein distance, the proposed framework allows for the arbitrary corruption of a fraction of the data. The authors design an uncertainty set using a robust Wasserstein ball and derive minimax optimal excess risk bounds. They also establish a strong duality for efficient computation. The resulting problem involves tuning three parameters: - the bounded covariance parameter $\sigma$, - the radius of the ambiguity set $\rho$, - and the contamination parameter $\varepsilon$. Moreover, the authors address dimension dependencies in risk bounds for low-dimensional features by introducing the projection robust optimal transport. The paper concludes with experimental validation of the theory on regression and classification tasks. Strengths: - The paper replaces the Wasserstein distance with the outlier-robust Wasserstein distance to tackle the case that the observed distribution is contaminated with outliers. - The authors establish the excess risk bounds of decisions for the cases $p=1,2$. - The authors derive tractable reformulation by replacing $\mathcal{G}_{\text{cov}}$ and leveraging dual of the problem. - The low-dimensional features are considered to address the problem of dimension dependency. Weaknesses: - In Remark 3 and Appendix E, the paper briefly touches on the selection of the parameter $\varepsilon$. However, the overall discussion on parameter selection is limited. The model introduced in the paper involves several parameters, such as $\rho$, $\varepsilon$, and $\sigma$, which may not be fully independent. Therefore, a more comprehensive discussion about the tuning of these parameters is warranted. 
- In the experiments, the paper exclusively uses the standard Wasserstein DRO (WDRO) as the baseline for comparison. However, considering that both models aim to address the outlier challenge, it would be valuable to include DFO [A], which is another model specifically designed for handling outliers, as a reasonable baseline for comparison. - The presence of a few missing references should be addressed in order to enhance the completeness and accuracy of the paper. In Section 4, it would be beneficial to include references [B, C, D], which introduce the concepts of Wasserstein projection pursuit and projection robust Wasserstein, respectively. [A] Jiang, Nan, and Weijun Xie. "DFO: A Framework for Data-driven Decision-making with Endogenous Outliers." (2022). [B] Huang, Minhui, Shiqian Ma, and Lifeng Lai. "A Riemannian block coordinate descent method for computing the projection robust Wasserstein distance." *International Conference on Machine Learning*. PMLR, 2021. [C] Paty, François-Pierre, and Marco Cuturi. "Subspace robust Wasserstein distances." *International Conference on Machine Learning*. PMLR, 2019. [D] Niles-Weed, Jonathan, and Philippe Rigollet. "Estimation of Wasserstein distances in the spiked transport model." *Bernoulli* 28.4 (2022): 2663-2688. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Would it be possible to provide further elaboration on the process of determining suitable parameter values for $\rho$, $\varepsilon$, and $\sigma$? - In the proof of Proposition 7 (Appendix B.8), is the constraint $\sum_{i\in [n]}m_i = 1$ included to ensure that $\mu^\prime$ is a valid probability distribution? It appears that in the problem mentioned above line 566, the constraint should be $0 \leq r_i \leq s_i$ rather than the constraint $0 \leq s_i \leq r_i$.
- In lines 48-49, it appears that the term $W_p^\varepsilon$ is defined in reference to outlier-robust optimal transport, rather than the partial optimal transport as described in [D], which focuses on transporting partial mass from the source distribution to the target distribution. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Unless I missed it, I believe the authors do not expand on the limitations of their approach. It would help to add a short section discussing that. This work does not have a negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: **Parameter selection:** Thank you for this important question. Please see **Common Response 1**. We further provide a proof of our claim therein that knowledge of $\rho$ is necessary for meaningful risk bounds under adversarial Wasserstein perturbations. Lower bound (Necessity of knowing $\rho$): We construct a family of one-dimensional loss functions $\mathcal{L}$ and an observed distribution $\tilde{\mu} \in \mathcal{P}(\R)$ against which any decision $\hat{\ell} \in \mathcal{L}$ chosen as a function only of $\tilde{\mu}$ must suffer risk $$ \E_\mu[\hat{\ell}] \gg \inf\_{\ell \in \mathcal{L}} \E\_\mu[\ell] + \mathsf{W}\_1(\tilde{\mu},\mu) \\|\ell\\|\_{\mathrm{Lip}} $$ for some clean distribution $\mu \in \mathcal{P}(\R)$. Specifically, we consider the family $\mathcal{L} = \{ \ell_\theta: \theta > 0 \}$, where $$\ell_\theta(z) := \frac{z}{\theta} + \theta.$$ Assume that the learner observes $\tilde{\mu} = \delta_0$ and selects decision $\hat{\ell} = \ell_{\hat{\theta}}$ for $\hat{\theta} > 0$. If the true distribution was $\mu = \delta_\rho$, then the optimal decision would have been $\theta_\star = \sqrt{\rho}$. In this case, the learner suffers excess risk $$ \ell_{\hat{\theta}}(\rho) - \ell_{\theta_\star}(\rho) = \frac{\rho}{\hat{\theta}} + \hat{\theta} - 2\sqrt{\rho} = \left(\sqrt{\rho/\hat{\theta}} - \sqrt{\hat{\theta}}\right)^2. $$ As $\rho \to \infty$, this far exceeds the desired lower bound of $ \mathsf{W}\_1(\tilde{\mu},\mu) \cdot\\|\ell\_{\theta\_\star(\rho)}\\|\_{\mathrm{Lip}} = \rho \cdot \rho^{-1/2} = \sqrt{\rho}$. This construction fails when $\rho$ is known, since the learner may simply select $\hat{\theta} = \sqrt{\rho}$. **Comparison to [A]:** Thank you for this reference. We will include [A] in our related work and add it as a baseline for comparison. 
We would like to emphasize, however, that the DFO approach requires solving a non-convex optimization problem, significantly impacting its scalability. Further, this method is not accompanied by any proof of minimax optimality. **Missing references:** We will update our related work and discussion to include these citations. We note that the notion of robustness considered in these papers is distinct from ours (although we do employ such sliced distances in our analysis for Section 4). **Unit mass constraint:** We thank the reviewer for raising this important point. Our tractable dual reformulation is indeed missing a Lagrange multiplier corresponding to the constraint that probability distributions have unit mass. The corrected dual is $$ \sup\_{\nu \in \cG\_2(\sigma,z\_0):\\, \mathsf{W}\_p^\ep(\tilde{\mu}\_n\\|\nu) \leq \rho} \E\_\nu[\ell] = \inf\_{\substack{\lambda\_1, \lambda\_2 \in \R\_+ \\\\ \alpha \in \R}} \lambda\_1 \sigma^2 + \lambda_2 \rho^p + \alpha + \frac{1}{1-\ep} \E\_{\tilde{\mu}\_n} \big[\\,\overline{\ell}(\cdot\,;\lambda\_1,\lambda\_2, \alpha) \big], $$ where $\overline{\ell}(z;\lambda\_1,\lambda\_2, \alpha) := \sup\_{\xi \in \mathcal{Z}} \\,\big[\\,\ell(\xi) - \lambda\_1 \| \xi - z\_0 \|^2 - \lambda\_2 \| \xi - z \|^p - \alpha \big]\_+$. Recalling the conditional value at risk defined by $\mathrm{CVaR}\_{1-\ep, \mu} [\ell(Z)] = \inf\_{\alpha \in \R} \alpha + \frac{1}{1 - \ep} \E\_{Z \sim \mu} \big[[\ell(Z) - \alpha]\_+\big]$, this can be restated as $$ \inf\_{\lambda\_1, \lambda\_2 \in \R\_+ } \lambda\_1 \sigma^2 + \lambda\_2 \rho^p + \mathrm{CVaR}\_{1-\ep, \tilde{\mu}\_n} \left[ \sup\_{\xi \in \mathcal{Z}} \\, \ell(\xi) - \lambda\_1 \| \xi - z\_0 \|^2 - \lambda\_2 \| \xi - Z \|^p \right]. $$ When $\ep \to 0$ and $\sigma \to \infty$, CVaR reduces to the standard expected value and the minimizing value for $\lambda_1$ is 0; we thus recover the classic WDRO dual as a special case. 
In the new Figure 2, we display experiments updated with this correction. Results are essentially unchanged, with the exception of reduced performance when $\ep$ is too small. Indeed, the corrected dual form invalidates Remark 3 (which relied on an insensitivity of the incorrect dual to $\ep$ that no longer holds). We will replace this with a remark on parameter selection as described in **Common Response 1**. We are currently investigating theoretical justification for the strong performance without the constraint and will include our findings in the final version. **Outlier-robust vs. partial OT:** We note that our definition of outlier-robust OT $\RWp$ corresponds to a partial OT distance up to a constant prefactor: $$ \RWp(\mu,\nu)^p = (1-\ep)^{-1}\inf\_{\substack{\pi \in \Pi(\mu,\nu)\\\\\pi\_1 \leq \mu, \pi_2 \leq \nu\\\\\pi(\mathcal{Z} \times \mathcal{Z}) = 1-\ep}} \int \|x - y\|^p d \pi(x,y). $$ **Limitations:** Thank you for this suggestion. We will add a discussion of limitations before the conclusion. In particular, we will reiterate the required knowledge of problem parameters and discuss the practical implications of Assumption 2 (see response to Reviewer vYVj). --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for providing clarifications that address my concerns. I'm pleased to see the correct dual problem formulation. After carefully reading all the reviews and rebuttals, I have decided to revise my score from 4 to 6.
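As an illustrative aside (our own toy numbers, not from the submission): the CVaR term appearing in the corrected dual is easy to evaluate for empirical distributions, where the Rockafellar-Uryasev infimum over $\alpha$ reduces to averaging the largest $(1-\varepsilon)$-fraction of losses.

```python
import numpy as np

def cvar_ru(losses, eps, alpha):
    """Rockafellar-Uryasev objective: alpha + E[(L - alpha)_+] / (1 - eps)."""
    losses = np.asarray(losses, dtype=float)
    return alpha + np.mean(np.maximum(losses - alpha, 0.0)) / (1.0 - eps)

def cvar_closed_form(losses, eps):
    """For an empirical measure: average of the largest (1 - eps)-fraction of losses."""
    x = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    n, mass = len(x), 1.0 - eps
    acc, remaining = 0.0, mass
    for xi in x:
        w = min(1.0 / n, remaining)  # each atom carries probability 1/n
        acc += w * xi
        remaining -= w
        if remaining <= 0:
            break
    return acc / mass

losses, eps = [0.0, 1.0, 2.0, 3.0], 0.5
# minimize the RU objective over a grid of alpha values
ru_min = min(cvar_ru(losses, eps, a) for a in np.linspace(-1.0, 4.0, 2001))
print(cvar_closed_form(losses, eps), round(ru_min, 6))
```

Both computations agree, and with $\varepsilon = 0$ the CVaR reduces to the plain mean, matching the $\ep \to 0$ limit noted above.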
Summary: This paper introduces a novel approach for making Wasserstein distributionally robust optimization problems robust to adversarial outliers, including both geometric perturbations and non-geometric contamination of the data. This goal is achieved by considering a Wasserstein ball that includes both types of adversarial attack. The authors then provide strong duality results, which allow them to improve the computational complexity of the proposed outlier-robust WDRO problem. Strengths: This is a well written paper with novel contributions. The robustness of WDRO to outliers is a very natural problem, and the authors propose Wasserstein-distance-based constraints to provide robustness. This is particularly hard if we do not make any assumptions about the distribution of the data; therefore, I find the contributions of this paper to be significant to the literature. There are some concerns that I mention in the weaknesses part, but overall I am impressed by the work done here. Weaknesses: As I have mentioned in the strengths part, I believe this paper makes a novel contribution to the robust optimization literature. A few weaknesses I observed: the authors assume readers have prior knowledge of the relation between adversarial attacks and Wasserstein and TV perturbations. These are not obvious to me, and it would be useful to include some discussion/motivation about how these attacks can be associated with the claimed perturbations in metric space. I believe it would strengthen the paper if the authors could obtain some results on larger datasets. Considering the time constraint of the conference, I believe it would be sufficient to add one large dataset to the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the definition for $\theta$ performing uniformly well over the Wasserstein ball (stated in Line 29)? - I could not see the definition of $\Sigma_\mu$ in the paper. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Assumption 2 can be challenging to check and maintain in practice. Moreover, this approach might also be hard to implement on large datasets such as classification problems on MNIST or FMNIST. That being said, I don't see these possible limitations as a major problem considering the theoretical novelty of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: **Relation between adversarial attacks and Wasserstein/TV perturbations:** Thank you for bringing this up. We will add the new Figure 1, along with accompanying discussion, to the introduction to clarify the nature of our allowed perturbations. Intuitively, the perturbation set $\mathcal{B}_{\varepsilon,\rho}(\mu) := \\{\tilde{\mu} : \mathsf{W}_p^\varepsilon(\tilde{\mu},\mu) \leq \rho\\}$ describes the following adversary. First, the $\mathsf{W}_p$ perturbation of radius $\rho$ enables the adversary to geometrically move samples of $\mu$ around for a total $L^p$ displacement cost of $\rho$, to arrive at a new distribution $\mu'$. The $\varepsilon$-TV perturbation further enables mixing $\mu'$ with an arbitrary outlier mass $\alpha$ to generate the distribution $\tilde\mu=(1-\varepsilon)\mu'+\varepsilon\alpha$. More formally, the adversary replaces each sample $X \sim \mu$ with $X + \Delta(X)$, for some (potentially stochastic) displacement map $\Delta : \mathbb{R}^d \to \mathbb{R}^d$ such that $\mathbb{E}[\|\Delta(X)\|^p | \mathcal{E}] \leq \rho^p$, where $\mathcal{E}$ is some event with probability $1-\varepsilon$. **Results on larger datasets:** Our approach can indeed scale to more complex data sets. As a proof of concept, we compare standard and outlier-robust WDRO (implemented via Theorem 2) for binary classification between two MNIST digit categories with 10\% of training labels flipped, see the new Figure 3. Training linear classifiers with hinge loss on 10, 20, 50, and 100 training digits (each naively reduced to 50 dimensions), we found that our approach consistently outperformed standard WDRO in classification accuracy. For the final version, we will extend this test to larger dimensions and training set sizes, and include the results in our Section 5. 
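As a minimal 1D sketch of the two-stage adversary described above (entirely our own construction; `rho`, `eps`, and the outlier location are arbitrary toy choices): first displace each sample with average $L^p$ cost at most $\rho$, then replace an $\varepsilon$-fraction of points with arbitrary outlier mass.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 2
rho, eps = 0.3, 0.1        # toy perturbation budgets

x = rng.normal(size=n)     # clean samples from mu

# Stage 1: geometric W_p perturbation -- a displacement map Delta with
# E[|Delta(X)|^p]^(1/p) <= rho (here, a deterministic shift of size rho)
delta = np.full(n, rho)
x_moved = x + delta

# Stage 2: eps-TV contamination -- replace an eps-fraction with outliers
k = int(eps * n)
idx = rng.choice(n, size=k, replace=False)
x_tilde = x_moved.copy()
x_tilde[idx] = 100.0       # arbitrary outlier mass

# In 1D with equal sample sizes, W_p between empirical measures is computed
# by matching sorted samples; a constant shift gives W_p exactly rho.
w_p = np.mean(np.abs(np.sort(x_moved) - np.sort(x)) ** p) ** (1 / p)
print(round(w_p, 3))
```

The observed sample `x_tilde` then lies in the perturbation set $\mathcal{B}_{\varepsilon,\rho}(\mu)$ described in the rebuttal.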
**Uniform performance over Wasserstein ball:** On Line 29, the sentence "$\hat{\theta} \in \Theta$ performs uniformly well over the Wasserstein ball" was intended to be a qualitative description of Eq. 1. Indeed, the minimizer $\hat{\theta}$ is selected to incur low risk $\mathbb{E}_{Z \sim \nu}[\ell(\theta,Z)]$ w.r.t. any distribution $\nu$ lying within Wasserstein distance $\rho$ of the observed data distribution $\tilde{\mu}$. We will adjust the phrasing to emphasize that we are optimizing for such a uniform risk bound, rather than suggesting that the performance of $\hat{\theta}$ is similar for all such $\nu$ (assuming this was the cause for confusion). **Definition of $\Sigma_\mu$:** $\Sigma_\mu \in \R^{d \times d}$ denotes the covariance matrix of a probability measure $\mu \in \mathcal{P}(\R^d)$. We will include this definition in the final version. **Strength of Assumption 2:** Generally, any continuous function can be approximated arbitrarily well by a maximum of finitely many concave functions. However, the number of needed functions can be arbitrarily large, which raises efficiency concerns in practice. For instance, the $\ell\_1$-norm $\|z\|\_1 = \max_{\sigma \in \{\pm 1\}^d} \sum\_{i=1}^d \sigma\_i z\_i$ requires $2^d$ concave functions, whereas the $\ell\_\infty$-norm $\|z\|\_\infty = \max_{i \in [d], \sigma \in \{\pm 1\}} \sigma z\_i$ requires only $2d$. We will add a discussion of this limitation to the final version. The question of how to efficiently perform outlier-robust WDRO for loss functions requiring $\exp(d)$ concave pieces is an interesting avenue for future research. In such cases, it may be appropriate to apply gradient methods directly to our dual form (Eq. 6). --- Rebuttal Comment 1.1: Comment: I have read authors rebuttal and I believe their arguments are justified. I thank for authors for their work and would like to keep my rating as it is. --- Reply to Comment 1.1.1: Comment: Thank you for your kind response. 
We were wondering if there are any other additions to the text that the reviewer would like to see that could further improve their assessment of the work? --- Rebuttal Comment 1.2: Title: There is a more precise relation between adversarial attacks and Wasserstein perturbations. Comment: Upon reading the review of vYVj and your response, I would like to point out that there is a more precise relation between adversarial attacks and Wasserstein perturbations besides the intuition provided in your response. Reference [*] establishes the equivalence between robustness against adversarial attacks with perturbation radius $\epsilon$ and distributional robustness in a ball of radius $\epsilon$ w.r.t. $W_\infty$ metric. I wonder if your strong duality result holds as $p\to \infty$. [*] Pydi, M. S., & Jog, V. (2021). The many faces of adversarial risk. Advances in Neural Information Processing Systems, 34, 10000-10012. --- Reply to Comment 1.2.1: Comment: Indeed, there is an equivalence between point-wise adversarial attacks and $\mathsf{W}\_\infty$ perturbations for standard WDRO. Fix observed data $\tilde{\mu}\_n = \frac{1}{n} \sum\_{i=1}^n \delta\_{\tilde{z}\_i}$, and write $\mathcal{B}\_\infty := \\{ \nu \in \mathcal{P}(\mathcal{Z}) : \mathsf{W}\_\infty(\nu,\tilde{\mu}\_n) \leq \rho \\}$. By Lemma EC2 of [A], $\mathcal{B}\_\infty$ admits the equivalent representation $$ \mathcal{B}\_\infty = \left\\{\ \frac{1}{n}\sum\_{i=1}^n \delta\_{z\_i} : \\|z\_i - \tilde{z}\_i\\| \leq \rho, z_i \in \mathcal{Z} \right\\}, $$ and we have $$ \sup\_{\nu \in \mathcal{B}\_\infty} \mathbb{E}\_\nu[\ell(Z)] = \mathbb{E}\_{\tilde{\mu}\_n}[\bar{\ell}(Z)], $$ where $\bar{\ell}(z) := \sup\_{z' \in \mathcal{Z}, \\|z' - z\\| \leq \rho} \ell(z')$. Our theory also extends naturally to this $p \to \infty$ limit. 
For the robust Wasserstein ball $\mathcal{B}\_\infty^\varepsilon := \\{ \nu \in \mathcal{P}(\mathcal{Z}) : \mathsf{W}\_\infty^\varepsilon(\nu,\tilde{\mu}\_n) \leq \rho \\}$, we can similarly prove $$ \sup\_{\nu \in \mathcal{B}\_\infty^\varepsilon} \mathbb{E}\_\nu[\ell(Z)] = \mathrm{CVaR}\_{\tilde{\mu}\_n}^{1-\varepsilon}[\bar{\ell}(Z)], $$ where $\mathrm{CVaR}$ is the conditional value at risk appearing in our corrected dual. Enforcing our moment constraints, we can prove $$ \sup\_{\nu \in \mathcal{B}\_\infty^\varepsilon \cap \mathcal{G}\_2(\sigma,z\_0)} \mathbb{E}\_\nu[\ell(Z)] = \inf\_{\lambda \geq 0} \lambda \sigma^2 + \mathrm{CVaR}\_{\tilde{\mu}\_n}^{1-\varepsilon}[\bar{\ell}\_2(Z)], $$ where $\bar{\ell}\_2(z) := \sup\_{z' \in \mathcal{Z} : \\|z' - z\\| \leq \rho} \ell(z') - \lambda\\|z' - z\_0\\|^2$, and $$ \sup\_{\nu \in \mathcal{B}\_\infty^\varepsilon \cap \mathcal{G}\_\mathrm{cov}(\sigma,z\_0)} \mathbb{E}\_\nu[\ell(Z)] = \inf\_{\Lambda \succeq 0} z\_0^\top \Lambda z\_0 + \sigma^2 \mathrm{Tr}(\Lambda) + \mathrm{CVaR}\_{\tilde{\mu}\_n}^{1-\varepsilon}[\bar{\ell}\_\mathrm{cov}(Z)], $$ where $\bar{\ell}\_\mathrm{cov}(z) := \sup\_{z' \in \mathcal{Z} : \\|z' - z\\| \leq \rho} \ell(z') - z'^\top \Lambda z'$. Note that both cases recover the standard dual as $\varepsilon \to 0$ and $\sigma \to \infty$. [A] : Rui Gao, Xi Chen, and Anton J. Kleywegt. Wasserstein Distributionally Robust Optimization and Variation Regularization. Operations Research, 2022, to be published. doi:10.1287/opre.2022.2383
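To make the robustified loss $\bar{\ell}$ in the $p \to \infty$ discussion concrete, here is a toy check of our own (assuming $\ell(z) = z^2$ on the real line, which is not a loss from the paper): the sup over the radius-$\rho$ ball has closed form $(|z| + \rho)^2$, and averaging it over a sample gives the worst-case expectation on the left-hand side of the first display.

```python
import numpy as np

def ell(z):
    # toy loss ell(z) = z^2 (our choice; the paper's ell is generic)
    return z ** 2

def ell_bar_grid(z, rho, m=20001):
    """Robustified loss: sup over the radius-rho ball around z, via grid search."""
    return float(np.max(ell(np.linspace(z - rho, z + rho, m))))

rho = 0.5
# closed form for ell(z) = z^2: the sup is attained at the endpoint of the
# interval farthest from the origin, giving (|z| + rho)^2
checks = {z: (ell_bar_grid(z, rho), (abs(z) + rho) ** 2)
          for z in [-2.0, -0.3, 0.0, 1.7]}

# worst-case risk over the W_infinity ball around an empirical sample:
# sup_{nu in B_inf} E_nu[ell] = E_{mu_n}[ell_bar]
sample = np.array([-1.0, 0.0, 2.0])
worst_case = np.mean([(abs(z) + rho) ** 2 for z in sample])
print(round(worst_case, 4))
```

The grid-search and closed-form values of $\bar{\ell}$ coincide at every test point.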
Rebuttal 1: Rebuttal: $\newcommand{\cG}{\mathcal{G}}\newcommand{\ep}{\varepsilon}\newcommand{\RWp}{\mathsf{W}_p^\ep}\newcommand{\E}{\mathbb{E}}\newcommand{\R}{\mathbb{R}}\newcommand{\sg}{\sigma}\newcommand{\mr}{\mathrm}$**Common Response:** We thank the reviewers for their time and feedback. Below we provide a common response to shared concerns. **1. Parameter selection:** Selection of $\rho$, $\sg$, and $\ep$ appearing in Eq. 8 and its tractable dual reformulation is a key practical consideration. First, we observe that knowledge of valid upper bounds on these parameters is sufficient to attain excess risk bounds scaling in terms of said upper bounds. This approach avoids meticulously tuning the parameters but may result in suboptimal risk. Below, we discuss methods for parameter selection that attain optimal minimax risk. First, we note that knowledge of $\rho$ is necessary to attain meaningful risk bounds against adversarial Wasserstein perturbations, even for standard WDRO (i.e., $\ep = 0$, $\sg \to \infty$). See the beginning of our response to Reviewer zQim for a proof of this claim. In the popular setting where $\rho$ models only sampling error (i.e., $\rho_0 = 0$ in the setting of Section 3.2), without adversarial perturbations, knowledge of one of $\sg$ or $\ep$ is enough to tune $\rho$. If $\sg$ is known, then one may select $\rho = O(\sg n^{-1/d})$, as discussed in Proposition 4 (where $\sg \leq \sqrt{d}$ by the covariance bound). If $\sg$ is unknown but $\ep$ is known, we can show that the bootstrapped estimate $\hat{\rho} = 2\mathsf{W}\_p^{2\ep}(\tilde{\mu}\_{S}, \tilde{\mu}\_{[n] \setminus S})$ gives near optimal guarantees, where $S$ is a uniformly random $n/2$-sized subset of $[n]$. Next, we address tuning $\sg$ and $\ep$, assuming henceforth that $\rho$ is known or was tuned as described above, based on knowledge of one of them. 
In general, knowing at least one of $\sg$ or $\ep$ is necessary to obtain non-trivial risk bounds, even in the easier problem without Wasserstein perturbations (i.e. $\rho = 0$). Otherwise, it is information theoretically impossible to meaningfully distinguish inliers from outliers (see Exercise 1.7b of "Algorithmic High-Dimensional Robust Statistics" by Diakonikolas and Kane, 2022, for a discussion of this issue in the setting of robust mean estimation). The first step of our approach is to obtain an accurate robust mean estimate, which we discuss next. If $\ep$ is known but $\sg$ is unknown, then the mean estimation algorithms we employ are fully specified and we may proceed with outlier-robust WDRO. In the opposite case, when $\sg$ is known but $\ep$ is unknown, we employ a halving trick, dividing a guess $\hat{\ep}$ by two until an appropriate $\hat{\ep}$-trimmed variance of the observed data is $O(\sg)$. Having an accurate robust mean estimate, to run the main robust WDRO procedure, we can learn the missing parameter via an analogous halving trick. Namely, we use a binary search to set the missing parameter as small as possible, such that the primal problem (Eq. 5) is feasible (corresponding to the tractable dual (Eq. 6) being bounded). The number of search steps scales logarithmically with the ratio of the initial guess to the true value. We will add a remark and a Supplement section to expand on the above in the final version. **2. Comparison to WDRO w/ expanded radius:** We agree that the discussion in Remark 1 should be significantly expanded, which we will do in the revision to account for the points below. As mentioned in the remark, the main issue with running vanilla WDRO with an expanded radius is that the Wasserstein ball must be centered around the output of a complicated minimum distance estimation (MDE) procedure, namely $\hat{\mu} = \mr{argmin}_{\nu \in \cG} \mathsf{W}_1^\ep(\tilde{\mu} \| \nu)$. 
While [28] provides a statistical analysis for this estimate, the procedure they propose for finite-sample computation is a heuristic one based on Wasserstein GANs, which lacks formal guarantees. An efficient alternative to the MDE-based approach is possible for the class $\cG\_\mr{cov}$ using the popular iterative filtering method [8], but the resulting (sharp) risk bounds are suboptimal. Specifically, we can prove that iterative filtering returns a distribution $\check{\mu}$ with $\mathsf{W}\_1(\check{\mu},\mu) \lesssim \sqrt{d}\rho + \sqrt{d\ep}$ (and that this guarantee is tight). Performing vanilla WDRO around $\check{\mu}$ with radius $\sqrt{d}\rho + \sqrt{d\ep}$ yields excess risk $\\|\ell\\|\_\mr{Lip} (\sqrt{d}\rho + \sqrt{d\ep})$. This bound is suboptimal in dependence on $\rho$ by a $\sqrt{d}$ factor (compare to our Corollary 1), which becomes prohibitive as dimension grows. We are working to prove a stronger lower bound, showing that vanilla WDRO around $\check{\mu}$ with any radius leads to suboptimal excess risk, and will report any findings in the final version. Even if we ignore the tractability of finding $\hat{\mu}$, we are unaware of any theory which would allow matching our risk bounds from Section 4 for $k$-dimensional features using standard WDRO. Although the MDE estimate satisfies the stronger (and also tight) approximation guarantee $\mathsf{W}\_1(\hat{\mu},\mu) \lesssim \rho + \sqrt{d\ep}$, running standard WDRO with this radius leads to the same risk bounds from Corollary 3, but with $\sqrt{d\ep}$ instead of $\sqrt{k\ep}$. Again, as $k\ll d$, this results in a significant worsening of the bound. Lastly, although somewhat subjective, our approach can be viewed as more holistic and interpretable than WDRO with expanded radius. 
The outlier-robust Wasserstein distance is tailored to account for the considered adversarial model, the strong duality clearly reveals the effect of the different parameters, and the characterization of worst-case distribution (see **Contextualization w/in related work** in response to Reviewer ZiUC) further elucidates the problem structure. Pdf: /pdf/ca200aec851bf7b8ec044340ef96e75ade26ada5.pdf
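The halving/binary-search trick from Common Response 1 can be sketched generically (the `feasible` oracle below is a hypothetical stand-in for "the dual problem (Eq. 6) is bounded", with a made-up hidden threshold; this is our own illustration, not the authors' code):

```python
TRUE_THRESHOLD = 0.37   # hidden ground truth (illustration only)

def feasible(param):
    """Stand-in oracle, monotone in param: larger parameters are feasible."""
    return param >= TRUE_THRESHOLD

def smallest_feasible(guess, tol=1e-6):
    """Halve the guess until infeasible, then binary-search the boundary."""
    assert feasible(guess), "initial guess must be a valid upper bound"
    hi = guess
    while feasible(hi / 2):
        hi /= 2
    lo = hi / 2                      # largest known infeasible value
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

est = smallest_feasible(100.0)
print(abs(est - TRUE_THRESHOLD) < 1e-5)  # True
```

The number of oracle calls is logarithmic in the ratio of the initial guess to the true value, matching the step count claimed in the common response.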
NeurIPS_2023_submissions_huggingface
2023
Consistent Aggregation of Objectives with Diverse Time Preferences Requires Non-Markovian Rewards
Accept (poster)
Summary: The paper considers a general multi-objective sequential decision-making setting where each objective may use a different discount factor. Using an axiomatic approach, the authors prove that under some axioms (vNM axioms + dynamic consistency for the relation on each objective), an aggregated preference relation cannot simultaneously satisfy the vNM axioms, dynamic consistency, Pareto indifference, and some technical conditions. In addition, the authors discuss some ways out of this impossibility result, notably via state augmentation or relaxing dynamic consistency. Strengths: Impossibility result, although the result is actually not very surprising when different discount factors are allowed; proposition and discussion of different solutions to this impossibility result Weaknesses: The presentation and organization of the paper could be improved. Notably: The results could be presented in a more accessible way to a more general audience. The authors seem to know the related literature in decision theory and economics well, which may not necessarily be the case for the NeurIPS audience. For instance, the second and third paragraphs of Section 5.1 are quite hard to follow for a non-expert. Section 4.2 should be checked. Some notation (e.g., h_{:-1}, y_i, or y) is not explained properly or is not used in a rigorous way. It is not clear to me why Section 5 combines a presentation of other solutions to the impossibility theorem and a discussion of related work. As far as I know, most work in multi-objective reinforcement learning applies an identical discount factor to all the objectives. Since the impossibility theorem doesn't apply in this case, the results of this paper don't apply to most such work. Therefore, I believe researchers may be misled by the title of this paper. I suggest the authors use a more precise one. 
Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: Not applicable, this is a theoretical paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful commentary. Please see the General Response above re: most points. Otherwise, the reason Section 5 is somewhat of a hybrid is because (1) the parts about intertemporal choice and stochastic preference necessarily introduce "related work" that wouldn't fit, or would be duplicative in a standalone related work section, and (2) having a standalone related work section for just reinforcement learning work seems odd. We're not sure how to improve this, but welcome any suggestions. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the rebuttal. I believe it has mostly addressed the issues I raised. For now, I will keep my score unchanged.
Summary: The authors analyze the implications of preference aggregation within a Markov Decision Process framework. They show that it is not possible to ensure dynamic consistency in an aggregated MDP if one also wants to be able to accommodate arbitrary preference criteria, even if the criteria are individually dynamically consistent. They further show that by relaxing the Markov condition incrementally, dynamic consistency can be recovered. Strengths: Addresses a fundamental representational issue in MDPs. The authors clearly have a deep understanding of the temporal consistency literature in economics and decision theory, and bring it to bear here. The technical exposition is clear, and the reasoning is sound. The example of procrastination is instructive. The construction of a patch to deal with different discount rates is perhaps the most directly useful contribution. There is also some interesting extended discussion about intertemporal preferences that could be quite relevant in an AI context. Weaknesses: The authors attempt to motivate the contribution in the context of the current "Reward is Enough" debate in RL. This is tantalizing, but seems to me a bit forced. Really we have a basic technical question in representation of intertemporal preferences, and the paper underlines a common lesson that aggregating across separate preferences is never as straightforward as one might think. The technical points are perhaps connected to some arguments debate participants have brought up (and they should indeed be better versed in intertemporal choice), but ultimately we face the same questions we always do about how much and what kind of state should we incorporate to keep things approximately enough Markovian. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Technical Comments C1. The whole concept of a discount factor in MDPs is to represent a global time preference. 
Why should anyone have thought it could be coherent to have different time preferences on different criteria? To the extent that discounting is really capturing dynamics in the world (e.g., consequences of work persisting), then arguably that would more properly be represented in the state space to begin with. That is, using the discount factor for this is a hack and one should not be surprised it is fragile. C2. Axiom 3 is really strong. It basically entails strong independence of criteria--what is needed to get an additive representation. Many readers will not realize that. Moreover, I suspect it is much stronger than needed to get your key impossibility result. That is, even forms with many more interaction terms will probably run into problems with temporal consistency. C3. On reflection, it seems to me that the key technical point here could be restated as saying that given Axiom 3, the *only* thing that can go wrong is mismatch of discount factors. Do you agree with that characterization? Quibbles Q1. About the procrastination story. Line 67 says that the policy must remember it had previously chosen "play" in order to work forever. Not true (it actually does not matter what was executed in the first step): it just needs to know it is beyond the first step. Q2. Title. "Multi-objective agency" is a very ill-defined term, so hard to buy the assertion it requires anything in particular. What is actually proved here is about what is required for dynamically consistent preference aggregation. Q3. Exposition had a few minor gaps, for example \cal{P} never formally defined (clear from context, though). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper adequately discusses assumptions and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and helpful commentary/questions. > The authors attempt to motivate the contribution in the context of the current "Reward is Enough" debate in RL. This is tantalizing, but seems to me a bit forced. … We think it is relevant for the following reason: - Suppose we have decided “how much and what kind of state” keeps things “approximately enough Markovian” for Alice and Bob, who have different preferences. Now suppose Alice and Bob purchase an LLM-based personal assistant Carl, who inherits (i.e. aggregates) their preferences. *The same state that was sufficiently Markovian for Alice and Bob may not be sufficiently Markovian for Carl.* If Carl now becomes a principal (e.g. it delegates some work to household robot Dave), we may need to again adjust our definition of "sufficiently Markovian" for Dave. And so on. --- **C1:** Please see general response. To the extent that different humans can have different time preferences, then agents serving multiple humans will face this problem. We think discounting captures more than just the “dynamics in the world” (and can actually be considered entirely apart from dynamics, as is the case in work that analyzes discounting of consumption streams (e.g. Koopmans 1960/Diamond 1965, etc.)). From a representational standpoint, we agree that the state space can be used to absorb everything, but such a representation may give up the compression advantages of Markovian rewards/discounts. **C2:** “I suspect it is much stronger than needed to get your key impossibility result.” <- this is a really interesting idea, which we will consider (at least for future work). That being said, while the consequence of Axiom 3 (together with VNM) is strong, we do not think the Axiom itself is strong: we think it is difficult to come up with a reasonable case where society (the agent) should prefer A to B if all members of society (the principals) are indifferent between A and B. 
Variations of the Pareto axiom are commonly assumed to be desirable (e.g. for Arrow’s Impossibility Theorem and other work on social welfare functions, and the works cited in Subsection 5.1). **C3:** Yes. Mismatch of discount factors is the only way Theorem 4 takes effect. That said, per our general response, we argue that mismatch of time preference is the general case whenever there are multiple principals. **Q1:** Good point, thank you. This could definitely be repaired by a more complex example, but TBD if we can maintain the simplicity while keeping the original claim. We will either correct the claim or tweak the example slightly. **Q2-Q3:** See general response. --- Rebuttal 2: Comment: I have read the author rebuttal. I particularly appreciate the authors' willingness to change the title. My overall evaluation of the paper is unchanged. A lot of the response appeals to an interpretation of criteria as multiple agents. Under that interpretation, we should be even less surprised that aggregation breaks things. Not just the Markov structure, but even the possibility of having a single reward function that maintains desirable properties of the collective.
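The point about mismatched discount factors (C3) can be illustrated with a tiny numeric sketch of our own (the rewards, gammas, and delays are arbitrary toy choices): a mixture of two exponential discount curves with different gammas is not itself exponential, and it produces preference reversals over time, whereas either component alone never does.

```python
def D(t, g1=0.5, g2=0.99, w=0.5):
    """Aggregated discount curve: a w-mixture of two exponential discounts."""
    return w * g1 ** t + (1 - w) * g2 ** t

def prefers_larger_later(delay, small=1.0, large=1.5, gap=5):
    """Does the aggregate prefer 'large reward at delay+gap' over 'small at delay'?"""
    return large * D(delay + gap) > small * D(delay)

# Each component alone yields a time-invariant comparison (the ratio
# large * g**gap / small does not depend on delay), but the mixture reverses:
print(prefers_larger_later(0))   # up close, the impatient component dominates
print(prefers_larger_later(20))  # far away, the patient component dominates
```

Under a single shared gamma, the comparison is independent of the delay, so no such reversal can occur; informally, this is the dynamic-consistency property that the aggregate loses when time preferences differ.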
Summary: This paper examines multi-objective reinforcement learning---the setting in which multiple distinct objectives are desired, and often combined, to form a composite objective. Concretely, the paper explores the limits of aggregating different objectives by appealing to three main pools of ideas: first, the von Neumann-Morgenstern expected utility axioms; second, Pareto indifference; and third, dynamic consistency. The main result is an impossibility result, illustrating that objectives with different time preferences (that is, different discount factors) cannot be aggregated in a way that yields a Markovian reward function even if the individual objectives themselves are Markovian. The paper then explores what lies beyond this impossibility result, exploring a mechanism for expanding the state-space to collapse the non-Markovian aggregated objective down to a Markovian one. This results in a "historical" discount factor, a hindsight view of the discount factor. Strengths: **STRENGTHS** The paper possesses many strengths: 1) The aspirations of the work are ambitious and important. Clearly establishing when certain kinds of objectives can and cannot be captured is important. 2) The work is rigorous, and well-connected to classical results in decision theory. 3) The examples are clear and help to communicate the main ideas. 4) The impossibility result on its own is interesting. Once I understood the details, it is not ultimately surprising, but I do not believe the result needs to be surprising to be useful. 5) Historical discounting is a new and interesting idea. Weaknesses: **WEAKNESSES** At the same time, the paper has several weaknesses: 1) Language. First, and my biggest critique, the work commonly makes use of unusual and vague language surrounding some of the main concepts. 
For instance, the title, and central idea of the work---multi-objective agency---is not well-defined, and by my reading is not an appropriate choice of description for the content of the work. I would recommend moving away from "agency" as a term, and certainly "multi-objective agency", as neither are well defined in this paper. Instead, I would suggest using "multi-criteria objectives", or "multi-objective RL", as is used in prior literature. It is much more clear and precise, and better connected with the work. 2) Clarity, and detail of exposition. Ideas are often introduced abruptly and not explained in much detail. For instance, the axioms in section 3.1 are simply stated without any added context or explanation. While vNM is quite common, dynamic consistency is less so, and likely deserves a more thorough, careful, and simple explanation, given its central role in the work. Similarly, after some theorems are introduced, they are sometimes not discussed. Other details are often left unexplained, such as "...none of which is a mixture of the other two" in Theorem 4 (the main result). From a quick reading it is possible to understand this, but it would be worthwhile to spell this out carefully. 3) Notation is sometimes defined quite precisely, but it is often overly complex or not defined. For instance, in Theorem 5, it is unclear what {$\succ_\Sigma^{sas'}$} is intended to mean. Conventions tend to deviate quite a lot from typical work in reinforcement learning as well (getting rid of the Q function in favor of two uses of V, using $\Pi$ instead of $\pi$). 4) Unsurprising results. Lastly, I do believe most of the results are unsurprising. When discount factors vary across objectives, it is perhaps expected that their aggregation will not be representable in the same form (and that we can augment the state space to remedy this). Still, it is useful to make these arguments carefully and rigorously. 
Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Primary recommendation: My strongest recommendation is to move Section 4 to the Appendix, and to use the remaining space to provide additional clarity and exposition around key ideas and results. I believe all of the Axioms need more careful and simpler introductions and discussions (and especially Axiom 3), as well as the main results. I believe this switch will strengthen the paper considerably. Main Comments/Questions: - "Our main contribution is an impossibility result from which one concludes that non-Markovian rewards are likely necessary for RL agents that pursue multiple objectives or serve multiple principals". This seems slightly too strong, by my reading of the main result---it really only applies when the discounts across the objectives differ. Is this the role that "likely" is playing in this statement? If so I might suggest adjusting the language to be more precise. - Separately, I wonder if the paper can comment on when the discount should be associated with an _agent_ rather than an _objective_ in isolation. - Two pieces of related work come to mind. First, the expressivity of "multi-dimensional" reward was explored by Miura in 2022: On the Expressivity of Multidimensional Markov Reward at the RLDM workshop on RL as Agency. I do wonder about the connections between these two sets of results. Second, Tasse et al. propose an algebra on tasks in RL in "A boolean task algebra for reinforcement learning". I wonder about how this style of composition bears on the findings and perspectives from the present paper. Small Questions: - "While this allows us to design agents that implement these policies, it doesn’t quite solve the intertemporal choice problem": what is meant by this statement? Actually designing agents that implement these policies does not seem to be in the purview of this paper. 
Instead, we can aggregate objectives in a way that allows us to _incentivize_ agents according to an appropriate objective, but this is different from the agent design process. - I do not understand Axiom 3 as stated and the explanation of the notation below is too brief to elucidate. Given the importance of the Axiom, I encourage expanding the explanation. Typos and writing suggestions: - As mentioned above, I think the phrase "multi-objective agency" is not the appropriate way to describe the main concept of the paper. I suggest replacing "multi-objective agency" with "multi-criteria objectives", or "multiple objectives", or some variant thereof. - I don't believe "(history dependent)" is needed in the abstract following "non-Markovian" - It looks as though reward functions are defined as both $R$ (in the definition of the MDP) and $r$ (in, say, Theorem 2). I would encourage picking one use throughout. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and actionable suggestions. Please see the general response for the most important points. Otherwise: **Primary Rec:** Regarding your suggestion to move Section 4 to the Appendix, we assume you are referring to just the “Relaxing Markov Preference” section (up to Subsection 4.1). We think perhaps Subsection 5.3 is an alternative candidate for the Appendix. In any case, we will do our best to improve the exposition per your suggestion. **Comment 1:** That was indeed the role that “likely” was meant to play (see ** in Global discount factor in General Response); we will adjust. **Comment 2:** Our bias is toward viewing discount factors in the economic sense, as representations of “time preference”, which arises naturally when considering preferences over temporal processes / trajectories / streams of consumption. This view would make the discount factor a property of “preferences”, which could either be expressed globally, as the preferences of a complex agent (e.g. a household robot), or individually with respect to a particular objective (e.g. the paperclip making agent). We are not sure if this is responsive to your comment (we see agent and objective as being entangled), but it is in contrast to a rather common view in RL that discounting is either a mathematical convenience, or done for purposes of optimization or regularization. We will consider adding some commentary on this in Subsection 5.4 (Related work in RL). **Comment 3:** - *Miura 2023*: Thank you, we were not aware of this work. Based on our read, Miura shows that any "set of acceptable policies" (as previously defined in Abel et al. 2021) can be represented as the solution set to a constraint-based multi-objective problem (i.e., all acceptable policies, and only acceptable policies, perform better than some lower bound with respect to all objectives). 
This type of constraint-based composition therefore reduces to a *set* of (equally preferred) policies, whereas the scalarization-based composition in our work reduces to a single social reward function (preference ranking). The fact that Markovian scalarization is insufficient to represent a "set of acceptable policies" was shown in Abel et al. 2021. - *Tasse et al. 2020*: In the Boolean Task Algebra, Tasse et al. consider a particular family of "goal achievement" tasks that make it possible to precisely compose tasks using boolean operations (min/max). Like our work, it is a scalarization-based approach to value function composition. Although it appears as non-linear scalarization (via min/max), we think it can be understood as taking the extreme of the linearly scalarized soft-value function composition approach taken by Haarnoja et al. 2018 and Van Niekerk et al 2019, which makes it related to the discussion in our Subsection 5.3. Unlike our work, it is restricted to a particular family of goal achievement tasks; furthermore they assume their tasks are terminating/undiscounted, so our results would not apply. **Question 1:** You are correct, thank you. We will adjust. **Question 2:** As applied to voting / social choice, Axiom 3 says that if all members in a society (indexed by $\mathcal{I}$) are indifferent between two (lotteries of) alternatives ($\tilde p$ and $\tilde q$), then so too is society. We will improve the exposition here. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I thank the authors for their thorough response to each of the reviews. The authors comments and responses to my questions have helped. A few follow up comments: - On the "primary recommendation", I did intend to suggest to move all of Section 4 to the appendix, and to remove emphasis on the point about escaping impossibility from the paper. 
It is of course up to you, but my suggestion would be to focus purely on the main idea of the paper, which is the impossibility surrounding aggregating non-Markov objectives with different time-preferences. This would give you more time and space to really flesh out the intuition of some of the axioms and the main result. - Comments 1 and 2: that makes sense, thanks. - Comment 3: Thanks, that helps. I believe some commentary about the differences to these two works could strengthen the paper, simply because they are other known ways of composing tasks that do work (at least in the case of the task algebra).
Rebuttal 1: Rebuttal: We thank the reviewers for their time and detailed reviews. We find the reviewers have understood our work and have provided helpful suggestions. We are acting on several of the suggestions, as noted below, and agree the changes will improve the paper. We welcome any additional feedback and questions during the discussion period. General responses / planned changes: - **Title / “Multi-Objective Agency” (WDhN, j2th, qGsM):** Upon reflection we agree this was not a good choice, and will change the title. We are considering alternatives that move away from the word “agency” and are explicit about “diverse time preference”, along the lines of: - Aggregating Objectives with Diverse Time Horizons Requires Non-Markovian Rewards - Non-Markovian Aggregation of Objectives with Diverse Time Horizons - Where Markovian Scalarization Fails: Aggregating Objectives with Diverse Time Preferences We will make similar tweaks throughout to improve precision. - **Global discount factor vs per objective discounting (WDhN, j2th in C1):** If an “Actor” possesses a single global discount function, and there are two such Actors (e.g. Mom and Dad), jointly acting as principals for a third Actor that we will call the “Agent” (e.g. a household Robot), then the Agent has inherited objectives/rewards from its principals that may** have conflicting time preference. (**actually, we believe this to be the general case, not merely a rare occurrence; see, e.g., Frederick, Loewenstein, and O’Donoghue (2002), Section 8 Paragraph 2, “Thus, there is no reason to expect that discount rates should be consistent across different choices”). - **Unsurprising main result (WDhN, qGsM):** A number of recent works have recognized or used (Markovian) transition dependent discounting (e.g. Bowling et al. 2023), and the initial question that led to this work was actually, “*How might such a transition dependent discount factor naturally arise?*”. 
Our initial hypothesis was that it might arise through aggregation of objectives with different fixed-discount factors (i.e., our initial hypothesis was a possibility result). So while we agree that the impossibility of linearly aggregating MDPs with different discounts into an MDP with a fixed discount and the same state space is not surprising (as recognized by, e.g., Singh & Cohn 1998), and we understand why others might view our result as unsurprising, we were, in fact, surprised by the impossibility. - **Exposition (WDhN, qGsM):** Per your suggestion, we will do our best to make room for some additional exposition around the Axioms/Theorems to make it more accessible to a NeurIPS audience, while maintaining the current content and structure. This may involve moving certain bits to the Appendix, per Reviewer WDhN’s suggestion. - **Notation (WDhN, j2th, qGsM):** We will double check usage for precision / rigor / consistency and add a notation glossary to the Appendix.
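As a concrete illustration of the mismatched-time-preference point above (an editorial sketch with made-up weights and discount factors, not an example from the paper): linearly aggregating two reward streams discounted at different rates yields an implied one-step discount that drifts with elapsed time, i.e., it depends on history rather than being a fixed Markovian quantity.

```python
# Editorial sketch: two principals weight the agent's reward with weights
# w1, w2 and discount factors g1, g2 (all values below are made up).
# The aggregate weight placed on the reward at step t is w1*g1**t + w2*g2**t,
# so the implied one-step discount between steps t and t+1 depends on t.

def effective_discount(t, w1=0.5, w2=0.5, g1=0.9, g2=0.99):
    """Ratio of the aggregate discount weight at step t+1 to that at step t."""
    num = w1 * g1 ** (t + 1) + w2 * g2 ** (t + 1)
    den = w1 * g1 ** t + w2 * g2 ** t
    return num / den

# The implied discount is not constant: it starts near the weighted average
# of g1 and g2 and drifts toward max(g1, g2) as the more patient objective
# comes to dominate.
early, late = effective_discount(0), effective_discount(200)
```

This time dependence is one way to see why a fixed-discount Markovian reward on the original state space cannot represent the aggregate, and why augmenting the state with a "historical" discount quantity can restore the Markov property.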
NeurIPS_2023_submissions_huggingface
2023
Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms
Accept (poster)
Summary: The paper introduces a new algorithm for multi-armed bandit problems, leveraging the FTRL framework and the flexible $\beta$-Tsallis entropy family of regularizers, where $\beta \in [0,1]$. This algorithm first uses a new learning rate schedule to offer best-of-both-worlds guarantees for a wide range of regularization parameters. Second, it eliminates the assumption that the optimal arm is unique. Third, it improves the stochastic bounds for Shannon entropy and log-barrier regularization. Strengths: - The paper introduces new elegant learning rates for $\beta$-Tsallis entropy, providing a best-of-both-worlds guarantee. - It generalizes the regret analysis approach by Ito (2021) by removing the uniqueness assumption of the optimal arm. - As $\beta$-Tsallis entropy is an important regularizer in bandit algorithms, the results could be useful in other settings too. Weaknesses: - The results presented in the paper for the plain multi-armed bandit, in my view, lack interest and significance compared to the algorithmic and analysis novelties. Specifically, there is no improvement in terms of regret bounds in the plain multi-armed bandit problem, as FTRL with the $1/2$-Tsallis regularizer (the 1/2-Tsallis-INF algorithm) already achieves the optimal bound in both adversarial and stochastic regimes, and the uniqueness assumption has already been addressed by Ito (2021). The only potential value lies in applying the same ideas to other settings, such as the decoupled exploration and exploitation problem, as discussed in the paper. - The bounds in intermediate regimes between stochastic and adversarial, where $C \neq 0$, seem to be suboptimal, as they do not interpolate well between the optimal bounds of the two regimes. See questions 2 and 3 for further clarification of this issue. - There are a few undefined notations used in the analysis. 
For instance, Equation (6) introduces the notation $D_U$ without providing a clear definition, and the same issue applies to $\phi_U(x)$ and $\phi_V(x)$ in Equation (7). If the notation in this part were consistent with Ito (2021), $\phi_U(x)$ would have to accept $x$ from $\mathbb{R}^{|U|}$, but in Equation (7) it takes $x \in \mathbb{R}^{K}$. This inconsistency requires revision for clarity and accuracy. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1- As stated in the paper, the regularizer always includes a small amount of extra log-barrier. Typically, the use of log-barrier introduces an additional multiplicative factor of $\sqrt{\log T}$ in the adversarial bound. However, your adversarial bound in Theorem 3.3 does not reflect this when $\beta = 1/2$. Can you elaborate on this? 2- Regarding the results in the so-called adversarial regime with a self-bounding constraint: in lines 183 and 184, you claim that your bound smoothly interpolates between $\log T$ and $\sqrt{T \log T}$ as $C$ ranges from $0$ to $T$. However, the presence of $D$ in your bound raises doubts about this smooth interpolation, since $D$ can be as large as $C$, and $C$ can be on the order of $\mathcal{O}(T)$. This essentially shows that your bound has no robustness to corruption. Could you address this inconsistency? 3- Following up on the previous question, there is an improvement in the analysis of the $1/2$-Tsallis-INF algorithm by Masoudian and Seldin (2021), which demonstrates smooth interpolation between the two optimal bounds, $\mathcal{O}(\sum_{i \neq i^*} \frac1{\Delta_i}\log T)$ and $\mathcal{O}(\sqrt{KT})$, as $C$ increases. Is there any way to achieve a similar improvement for your algorithm? If not, can you discuss the challenges involved in obtaining the same improvement for intermediate regimes? 
4- In the contribution section, for log-barrier regularization, the authors claim to have removed the uniqueness assumption of the optimal arm utilized in Ito (2021) and improved the stochastic bound. However, it should be noted that the algorithm proposed by Ito does not align completely with the algorithm presented in this paper, as Ito employs optimistic follow the regularized leader with different learning rates. Please provide further clarification. References: - Shinji Ito, Parameter-Free Multi-Armed Bandit Algorithms with Hybrid Data-Dependent Regret Bounds, COLT 2021 - Saeed Masoudian and Yevgeny Seldin, Improved analysis of the tsallis-inf algorithm in stochastically constrained adversarial bandits and stochastic bandits with adversarial corruptions, COLT 2021 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback. Please see our responses below: *** **Q1:** There are few undefined notations used in the analysis. For instance, Equation (6) introduces the notation $D_U$ without providing a clear definition, and the same issue applies to $\phi_U(x)$ and $\phi_V(x)$ in Equation (7). If the used notation for this part were consistent with [Ito, 2021], $\phi_U(x)$ must accept $x$ from $\mathbb{R}^{|U|}$ but in Equation (7) it takes $x \in \mathbb{R}^K$. This inconsistency requires revision for clarity and accuracy. **A1:** The definitions of $D_{\mathcal{I}}$ and $\phi_{\mathcal{I}}$ for any subset $\mathcal{I} \subseteq [K]$ are provided in lines 253-254, and they indeed take $x \in \mathbb{R}^K$ as input. For your convenience, we also copy the definitions here: For any subset $\mathcal{I} \subseteq [K]$, we define $D^{s,t}_{\mathcal{I}}(x,y) = \phi^s_{\mathcal{I}}(x) - \phi^t_{\mathcal{I}}(y) - \langle \nabla \phi^t_{\mathcal{I}}(y), x-y \rangle$, where $\phi^t_{\mathcal{I}}(x) = -C_{\log}\sum_{i\in\mathcal{I}} \log x_i - \frac{1}{1-\beta} \sum_{i \in \mathcal{I}} \gamma^{t}_i x_i^{\beta}$ (that is, $\phi^t$ restricted to $\mathcal{I}$). We will certainly improve the readability in the future version. *** **Q2:** As stated in the paper, the regularizer always includes a small amount of extra log-barrier. Typically, the use of log-barrier introduces an additional multiplicative factor of $\sqrt{\log T}$ in the adversarial bound. However, your adversarial bound in Theorem 3.3 does not reflect this when $\beta=1/2$. Can you elaborate on this? **A2:** In fact, a small (here, small means constant) amount of extra log-barrier usually only introduces an additive $K\log T$ term to the regret (which we omit in the bound for simplicity), but not a multiplicative factor of $\sqrt{\log T}$. 
The multiplicative factor of $\sqrt{\log T}$ in our bound for $\beta \neq 1/2$ comes from the specific learning rate that we propose, and this factor does not show up for $\beta = 1/2$ because in that case we simply use the arm-independent learning rate $\gamma_i^t = \Theta(\sqrt{t})$ of [31]. *** **Q3:** Regarding the results in the so-called adversarial regime with a self-bounding constraint: in lines 183 and 184, you claim that your bound smoothly interpolates between $\log T$ and $\sqrt{T\log T}$ as $C$ ranges from $0$ to $T$. However, the presence of $D$ in your bound raises doubts about this smooth interpolation since $D$ can be as large as $C$, and $C$ can be on the order of $O(T)$. This essentially shows that your bound has no robustness to corruption. Could you address this inconsistency? **A3:** You are right, and we will revise our statement. However, note that the same $D$ dependence also appears in [Ito, 2021]. Whether this can be removed in the absence of the uniqueness assumption is indeed an important question. *** **Q4:** Following up on the previous question, there is an improvement in the analysis of the $1/2$-Tsallis-INF algorithm by Masoudian and Seldin (2021), which demonstrates smooth interpolation between the two optimal bounds, $O(\sum_{i\neq i^\star}\log T/\Delta_i)$ and $\sqrt{KT}$, as $C$ increases. Is there any way to achieve a similar improvement for your algorithm? If not, can you discuss the challenges involved in obtaining the same improvement for intermediate regimes? **A4:** We do not think the technique in Masoudian and Seldin (2021) is helpful here, since this issue only shows up when we remove the uniqueness assumption (while Masoudian and Seldin still make this assumption). Specifically, recall that to use the self-bounding constraint, we typically rewrite the regret as $\text{Reg}^T = (1+\lambda)\text{Reg}^T-\lambda \text{Reg}^T$ for any $\lambda>0$. 
The term $(1+\lambda)D$ then shows up when decomposing the regret in the first term if we do not have uniqueness. *** **Q5:** In the contribution section, for log-barrier regularization, the authors claim to have removed the uniqueness assumption of the optimal arm utilized in [Ito, 2021] and improved the stochastic bound. However, it should be noted that the algorithm proposed by Ito does not align completely with the algorithm presented in this paper, as Ito employs optimistic follow-the-regularized-leader with different learning rates. Please provide further clarification. **A5:** We will clarify this in the final version. In fact, if one removes the hint vector in [Ito, 2021] (which is used to obtain data-dependent bounds) and sets $\nu^t_i=p^t_i$ in his algorithm, our techniques can also remove the uniqueness assumption for this algorithm. *** [Ito, 2021] Shinji Ito. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Proceedings of Thirty Fourth Conference on Learning Theory, 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions. I don't have further questions and will keep my score as is.
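As background for the self-bounding decomposition discussed in Q4/A4 above, here is a schematic version of the standard argument (an editorial sketch with constants suppressed, a unique optimal arm $i^\star$ assumed, and a generic adversarial-type bound $\mathbb{E}[\sum_{i\neq i^\star}\sqrt{N_i^T \log T}]$ taken as given; this is not the paper's exact derivation): for any $\lambda > 0$,

$$\mathrm{Reg}^T = (1+\lambda)\,\mathrm{Reg}^T - \lambda\,\mathrm{Reg}^T \le (1+\lambda)\,\mathbb{E}\Big[\sum_{i\neq i^\star}\sqrt{N_i^T\log T}\Big] - \lambda\Big(\sum_{i\neq i^\star}\Delta_i\,\mathbb{E}[N_i^T] - C\Big)$$

$$\le \sum_{i\neq i^\star}\max_{N\ge 0}\Big[(1+\lambda)\sqrt{N\log T} - \lambda\Delta_i N\Big] + \lambda C = \sum_{i\neq i^\star}\frac{(1+\lambda)^2\log T}{4\lambda\Delta_i} + \lambda C,$$

and tuning $\lambda$ to balance the last two terms yields the familiar $O\big(\sum_{i\neq i^\star}\log T/\Delta_i + \sqrt{C\sum_{i\neq i^\star}\log T/\Delta_i}\big)$ shape. When the optimal arm is not unique, decomposing the first $(1+\lambda)\mathrm{Reg}^T$ term is where the extra $(1+\lambda)D$ contribution enters.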
Summary: The authors focus on best-of-both-worlds (BOBW) algorithms based on follow-the-regularized-leader (FTRL) in multi-armed bandits. The theoretical guarantees for most existing FTRL-based BOBW algorithms were based on the assumption that the best arm is unique, in order to take advantage of self-bounding techniques. It is known that this assumption can be removed, as shown in [13], but that analysis was only applicable to the case of Tsallis-INF (FTRL with 1/2-Tsallis entropy), one of the most representative BOBW algorithms. Extending the analysis of [13], the authors show that a BOBW guarantee can be obtained with FTRL with more general regularizers, i.e., negative Shannon entropy, log-barrier, and $\beta$-Tsallis entropy, without the assumption of a unique optimal arm. Furthermore, by using the new theory, the authors improve the regret upper bound in the stochastic regime in the decoupled setting. Strengths: - The paper is very well organized and well written. - The paper greatly advances the theory of [13], removing the unique-optimal-arm assumption for a wide range of typical regularizers. The assumption has been employed for constructing BOBW algorithms with FTRL, and this is an interesting and important technical contribution to the community. - In addition, the authors affirmatively answer the question of whether it is possible to achieve BOBW without knowing $\Delta_{\min}$ when the $\beta$-Tsallis entropy with $\beta \neq 1/2$ is used (which was unresolved in Zimmert and Seldin [31]), and the question of whether it is possible to achieve BOBW with Shannon entropy while improving the $\Delta_{\min}$ dependence to a $\Delta_i$-wise one in the stochastic setting (unresolved in Ito et al. [14]). Both contributions are interesting and important. Related points are listed in the Weaknesses part. Weaknesses: - There do not appear to be any major weaknesses in this paper. 
- One weakness would be the absence of a discussion of whether removing the assumption of a unique optimal arm actually improves or worsens the performance of algorithms (the reviewer expects that the algorithm becomes more conservative and the performance becomes worse). - Since the discussion excluding the assumption of a unique optimal arm is cumbersome on its own, it would be desirable to have a discussion of which techniques in the paper contributed to resolving the problems. More specifically, which components of the algorithm play an important role in resolving the problems of [31] and [14], which are mentioned in the above Strengths part? In addition, if we accept the assumption of a unique optimal arm, can we achieve improvements in [31] and [14] with a much more similar argument? Minor issues and typos: - line 87: Sepcifically -> Specifically Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Shannon entropy relies on $(\log T)^2$ rather than $\log T$, which is the case for all published algorithms using FTRL with Shannon entropy, but do the authors think it is possible to reduce it to $\log T$? - In addition to the above, the reviewer expects the authors to address the questions pointed out in the Weaknesses section above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *** **Q1:** One weakness would be a discussion of whether removing the assumption of unique optimal arm actually improves or worsens the performance of algorithms (The reviewer expects the algorithm becomes more conservative, and the performance becomes worse.) **A1:** We in fact do not believe this would be the case. For example, Ito [2021] removes the uniqueness requirement without changing the algorithm at all (i.e., this is merely an improvement in the analysis). While our results unfortunately require making slight algorithmic modifications (such as adding a log-barrier regularizer) and also suffer an extra $\frac{|U|\log T}{\Delta_{\min}}$ term in the regret, we believe that these are all artifacts of our analysis. *** **Q2(a):** Since the discussion excluding the assumption of unique optimal arm is cumbersome on its own, it would be desirable to have a discussion of which techniques in the paper contributed to resolving the problems. More specifically, which components of the algorithm play an important role in resolving the problems of [31] and [14], which are mentioned in the above Strengths part? **A2(a):** Thanks for your suggestion, and we will add such a discussion to the final version. From an algorithmic perspective, our novel learning rate schedule plays the most important role. On the other hand, the novelty of our analysis has been highlighted in Sec 4, starting L263 in particular. *** **Q2(b):** In addition, if we accept the assumption of unique optimal arm, can we achieve improvements in [31] and [14] with a much more similar argument? **A2(b):** We do not think we can ``improve'' their results under the uniqueness assumption (especially for [31] since it is already optimal), but for the standard MAB problem considered in [31], we can at least recover their result, and the analysis indeed becomes much simpler. 
On the other hand, it is currently unclear to us whether the same holds for the case with graph feedback that is considered in [14]. *** **Q3:** Minor issues and typos: line 87, Sepcifically -> Specifically. **A3:** Thank you for spotting the typo. It will be fixed in the final version. *** **Q4:** Shannon entropy relies on $(\log T)^2$ rather than $\log T$, which is the case for all published algorithms using FTRL with Shannon entropy, but do the authors think it is possible to make it to $\log T$? **A4:** It is unclear to us at this point whether $\log T$ is achievable, but we point out that it is possible to improve $(\log T)^2$ to $\log T \log K$ in the MAB setting as shown in [Dann et al., 2023]. *** [Ito, 2021] Shinji Ito. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Proceedings of Thirty Fourth Conference on Learning Theory, 2021. [Dann et al., 2023] Chris Dann, Chen-Yu Wei, Julian Zimmert, A blackbox approach to best of both worlds in bandits and beyond. In Proceedings of Thirty Sixth Conference on Learning Theory, 2023. --- Rebuttal Comment 1.1: Title: Response confirmed Comment: Thank you for your response. All questions have been answered. The rating will remain the same.
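To make the FTRL updates under discussion concrete: for the $1/2$-Tsallis entropy, the stationarity condition on the simplex gives weights of the form $p_i = (\eta(\widehat{L}_i - \nu))^{-2}$ for a normalizing multiplier $\nu < \min_i \widehat{L}_i$, which can be found by bisection. The following is an editorial sketch of this standard computation (constants and the learning-rate schedule simplified; this is the Tsallis-INF-style update of Zimmert and Seldin, not the paper's arm-dependent schedule):

```python
import math

def tsallis_inf_weights(lhat, eta, iters=200):
    """FTRL weights for the 1/2-Tsallis entropy: p_i = (eta*(lhat_i - nu))**-2,
    with nu < min(lhat) chosen by bisection so that the weights sum to one."""
    k, m = len(lhat), min(lhat)
    lo = m - math.sqrt(k) / eta   # at this nu the weights sum to at most 1
    hi = m - 1e-12                # near min(lhat) the best arm's weight blows up
    for _ in range(iters):
        nu = (lo + hi) / 2
        if sum((eta * (l - nu)) ** -2 for l in lhat) > 1:
            hi = nu
        else:
            lo = nu
    p = [(eta * (l - (lo + hi) / 2)) ** -2 for l in lhat]
    z = sum(p)                    # renormalize to absorb residual bisection error
    return [pi / z for pi in p]
```

Arms with larger cumulative estimated loss receive polynomially (rather than exponentially) smaller probability, which is one intuition for the regularizer's best-of-both-worlds behavior.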
Summary: This paper studies the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-worlds guarantee). The authors show that the uniqueness assumption is unnecessary for FTRL with a broad family of regularizers and a new learning rate schedule. For some regularizers, their regret bounds also improve upon prior results even when uniqueness holds. Strengths: 1. The considered problem, i.e., best-of-both-worlds guarantees for multi-armed bandits, is important in the bandit literature. 2. The theoretical analysis looks sound, and the improvement is significant. 3. This paper is well-written and clearly organized. Weaknesses: 1. This paper does not provide any experimental result. It would improve the paper if the authors could conduct an empirical evaluation of their algorithms and compare them to existing BOBW algorithms, to validate their theoretical results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *** **Q1:** This paper does not provide any experimental result. It would improve the paper if the authors could conduct empirical evaluation for their algorithms and compare to existing BOBW algorithms, to validate their theoretical results. **A1:** We thank the reviewer for this suggestion. As in most previous work along this line (e.g. the closest one by Ito [2021]), our work focuses only on the theoretical side. We do think empirical evaluations would be interesting and plan to conduct them in the future. *** [Ito, 2021] Shinji Ito. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Proceedings of Thirty Fourth Conference on Learning Theory, 2021. --- Rebuttal Comment 1.1: Title: Thank the authors for their response Comment: I thank the authors for their response. This paper can be improved by including experiments. I tend to keep my score.
Summary: This paper considers the problem of proving best-of-both-worlds guarantees for algorithms based on the FTRL framework for the multi-armed bandit problem. While it has been demonstrated in (Zimmert and Seldin (2019, 2021)) that Tsallis-INF (FTRL with the $1/2$-Tsallis entropy regularizer) achieves optimal regret in both the adversarial and stochastic settings simultaneously, their analysis for the stochastic case relied on the assumption that the optimal arm is unique. The more recent work of Ito (2021) showed that Tsallis-INF still enjoys $\log T$ regret in the stochastic case even if the optimal arm is not unique. In this paper, the authors generalize the analysis of Ito (2021) to other regularizers. Namely, they prove, without the uniqueness assumption, best-of-both-worlds guarantees for FTRL with any $\beta$-Tsallis regularizer (including the log barrier and the Shannon entropy regularizers) using a new arm-dependent learning rate, albeit with all regularizers mixed with the log barrier for technical reasons. Strengths: - This work provides best-of-both-worlds guarantees without the unique optimal arm assumption for FTRL with a broad family of regularizers. While the $1/2$-Tsallis regularizer is the optimal choice (and already analyzed by Ito (2021)) for the standard bandit problem, other choices are still useful in closely related problems, as illustrated by the decoupled exploration and exploitation problem. - Moreover, this work seems to be the first to provide BOBW guarantees (without requiring prior knowledge of the suboptimality gaps) for the $\beta$-Tsallis regularizer when $\beta$ is not $1/2$. - Overall, the paper is well written and the presentation is clear. A concise sketch of the analysis technique is provided in the last section, and the proofs seem mostly well written and easy to follow. 
Weaknesses: - Unlike Ito (2021), the provided bounds include an added term of order $|U| \log(T) / \Delta_\min$ where $U$ is the set of optimal arms and $\Delta_\min$ is the smallest sub-optimality gap. Thus, the bounds are negatively affected when there are many optimal arms. While this is still an improvement in cases where prior works only achieved a $K \log(T) / \Delta_\min$ dependence (as in the Shannon entropy case), in other cases (most notably for the $1/2$-Tsallis regularizer analyzed by Ito (2021) without the uniqueness assumption) the provided results are inferior to prior works. - The fact that all regularizers are summed with a log barrier term is a little unsatisfactory. For instance, in the Shannon entropy case, we potentially lose the appealing property of having closed-form expressions for the predictions of FTRL. - Though probably curable with a doubling trick, the fact that the proposed approach sometimes requires prior knowledge of the time horizon is a minor weakness. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - It seems that the log barrier was not added to the Tsallis regularizer in the decoupled exploration and exploitation problem; was it to avoid the $K \log T$ term? Why did the analysis go through in this case but not in the standard bandit problem? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors did address some of the limitations of their work. Notably the fact that their analysis requires adding a log barrier term to all the considered regularizers. The authors also acknowledged, though a little less explicitly, the extraneous dependence of their bounds on the number of optimal arms. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable feedback. Please see our responses below: *** **Q1:** Unlike Ito [2021], the provided bounds include an added term of order $\frac{|U|\log T}{\Delta_{\min}}$ where $U$ is the set of optimal arms and $\Delta_{\min}$ is the smallest sub-optimality gap. Thus, the bounds are negatively affected when there are many optimal arms. While this is still an improvement in cases where prior works only achieved a $\frac{K\log T}{\Delta_{\min}}$ dependence (as in the Shannon entropy case), in other cases (most notably for the $1/2$-Tsallis regularizer analyzed by Ito [2021] without the uniqueness assumption) the provided results are inferior to prior works. **A1:** We agree that our bounds are negatively affected when there are many optimal arms due to an extra $\frac{|U|\log T}{\Delta_{\min}}$ term. However, it is worth noting that when using the $1/2$-Tsallis entropy regularizer, our result can in fact recover those of Ito [2021] (that is, without paying this extra $\frac{|U|\log T}{\Delta_{\min}}$ term). We did not write this down explicitly only because we wanted to unify all cases in a concise way, but it can be verified by bounding Eq. (12) using the argument immediately below that equation, instead of the more complicated one we provide that handles general $\beta$. Therefore, our result is strictly more general than previous ones, and importantly is the only one that achieves the best-of-both-worlds guarantee without the uniqueness assumption for other regularizers. *** **Q2:** The fact that all regularizers are summed with a log barrier term is a little unsatisfactory. For instance, in the Shannon entropy case, we potentially lose the appealing property of having closed-form expressions for the predictions of FTRL. **A2:** Indeed, the extra log barrier is not ideal. Removing it is an interesting but also challenging direction, which we plan to work on in the future. 
*** **Q3:** Though probably curable with a doubling trick, the fact that the proposed approach sometimes requires prior knowledge of the time horizon is a minor weakness. **A3:** Indeed, this is curable via a doubling trick (without hurting any of our regret bounds), as already mentioned in Section 5.3 of Ito [2021]. More specifically, we divide all the rounds $1, 2, \ldots$ into segments $\{C_k\}_{k=1}^{\infty}$ where $C_k=\{S_k+1,S_k+2,\ldots,S_k+T_k\}$ with $T_k=2^{2^k}$ and $S_k=\sum_{h=1}^{k-1}T_h$, and run a new instance of our algorithm parameterized with $T_k$ for the rounds in $C_k$. The analysis is then similar to that of Ito [2021]. *** **Q4:** It seems that the log barrier was not added to the Tsallis regularizer in the decoupled exploration and exploitation problem; was it to avoid the $K\log T$ term? **A4:** This is because, unlike in the MAB problem, an arm-independent learning rate is used in this case (note that $\alpha$ is set to $1/2$ and thus $\gamma_i^t$ is simply $\theta\sqrt{t}$). With such an arm-independent learning rate, we can show the desired stability without the help of an extra log-barrier regularizer. As you mentioned, this avoids paying the $K\log T$ term. *** [Ito, 2021] Shinji Ito. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Proceedings of Thirty Fourth Conference on Learning Theory, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
My opinion remains generally the same: though the obtained bounds do not feature the ideal dependence on the suboptimality gaps, this work still offers a solid contribution towards a better understanding of the BOBW performance of FTRL-based algorithms, both in lifting the uniqueness assumption and providing improved bounds for some regularizers (with the impact of these contributions partly hinging upon their applicability beyond the standard bandits problem, as illustrated in one case by the authors for the DEE problem).
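The doubling-trick segmentation described in the A3 rebuttal above (segments $C_k$ of length $T_k = 2^{2^k}$ starting at $S_k + 1$) can be sketched in a few lines. This is an illustrative implementation written for this discussion, not code from the paper or from Ito [2021]:

```python
# Sketch of the doubling-trick schedule from the rebuttal: rounds 1, 2, ...
# are split into segments C_k = {S_k+1, ..., S_k+T_k} with T_k = 2^(2^k)
# and S_k = T_1 + ... + T_{k-1}; a fresh algorithm instance, parameterized
# with horizon T_k, is run on each segment.

def segment_of_round(t):
    """Return (k, T_k, offset) for round t >= 1: the segment index k,
    that segment's length T_k, and t's 1-based position within it."""
    k, start = 1, 0  # 'start' accumulates S_k = sum of earlier T_h
    while True:
        T_k = 2 ** (2 ** k)
        if t <= start + T_k:
            return k, T_k, t - start
        start += T_k
        k += 1
```

For example, rounds 1-4 form $C_1$ ($T_1 = 4$), rounds 5-20 form $C_2$ ($T_2 = 16$), and round 21 starts $C_3$ ($T_3 = 256$); the doubly-exponential growth is what keeps the regret overhead of restarting negligible.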
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
High-dimensional Asymptotics of Denoising Autoencoders
Accept (spotlight)
Summary: This paper studies the performance of denoising autoencoders (DAEs) in high-dimensional settings. The DAEs are trained on data sampled from a Gaussian mixture with $K$ components perturbed by isotropic Gaussian noise. The DAEs are one-layer networks with arbitrary activation functions, tied weights, and a skip connection. They are trained with an $L_2$-regularized loss function, and the high-dimensional limit is considered, where the ratio of the number of training points to the input dimension converges to a constant $\alpha$. The main results of the paper are formulas for the denoising test mean squared error (MSE) for the full DAE, as well as two simpler architectures: a "bottleneck network" in which the skip connection is removed, and a "scalar linear network" that simply rescales the input by a scalar. The final formulas involve complicated integrals and optimization problems, but they only depend on low-dimensional quantities, such as the number of neurons in the hidden layer and the number of components in the Gaussian mixture. The authors use the derived asymptotic test error to analyze the role and importance of each component in the performance of DAEs. They find that the skip connection is essential for good performance and that DAEs can outperform other denoising methods, such as principal component analysis (PCA). Taking the noiseless limit, where the variance of the noise converges to zero, they can also derive the test error of reconstruction autoencoders, which are trained with noiseless inputs. Strengths: The authors managed to derive fairly complicated asymptotic formulas via the replica method from statistical physics. The empirical experiments seem to confirm the accuracy of their predictions. This provides yet another example of the successful application of the replica method and its surprisingly powerful nature. 
This paper fills a gap in the literature and complements previous results that mainly focused on RAEs and the infinite data regime (optimization of the population loss). The exact formulas derived in this paper are utilized to derive interesting non-trivial results, which can serve as stimuli for future research. Specifically: a) The MSE obtained by training a DAE with gradient descent matches the asymptotic predictions, suggesting that, while non-convex, the optimization problem has a favorable landscape. b) The exact formulas obtained for the Gaussian mixture predictions align with those obtained for a DAE trained on MNIST datasets, indicating the presence of universality phenomena. c) As $\alpha$ increases, the denoising MSE of the DAE approaches the asymptotic performance of the oracle denoiser (derived in the appendix). Weaknesses: The paper exhibits weaknesses similar to other papers in this field. The exact analytical formulas derived are somewhat obscure, and it seems that several of the observations made in the paper can be deduced directly from the results of the empirical experiments, without relying on the analytical formulas (e.g. the improved performance of the network with skip connections compared to the one without can be easily inferred). The authors should emphasize the results that their analytical formulas allow them to obtain. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It appears that all the experiments were conducted with a number of neurons $p = 1$. Could you please clarify whether the empirical results align with the replica predictions when $p > 1$? Additionally, does the "universality phenomenon" manifest itself also in this case? - Based on Figure 3, it appears that the performance of the DAE approaches that of the oracle denoiser. Can this convergence be rigorously derived from equation (13)? - What happens when $K = 1$? Do the DAE, PCA, and bottleneck network exhibit similar performance in this scenario? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This theoretical paper does not possess any immediate potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We answer their questions below: >The authors should emphasize the results that their analytical formulas allow them to obtain. We believe a strength of our analysis lies in the fact that it allows us to characterize learning metrics at the global optima of the empirical risk, whereas there is a priori no guarantee that a purely experimental study can consistently reach them. Furthermore, our analytical formulae allow us to probe the asymptotic behaviour of the learning problem, while a purely empirical approach would require disentangling the finite-size effects from the true asymptotic behaviour. Finally, the expressions further make it possible to establish the rates at which the learning metrics converge, as the sample complexity grows, to their infinite-data (population) limit $\alpha\to \infty$. We shall include this analysis in the final version of the manuscript. For enhanced clarity, we will also add this discussion on the insights afforded by our analytical formulae. >It appears that all the experiments were conducted with a number of neurons $p=1$. Could you please clarify whether the empirical results align with the replica predictions when $p>1$? Additionally, does the "universality phenomenon" manifest itself also in this case? While we state the asymptotic characterization in full generality, we indeed restrict the discussion of results to $p=1$ hidden units. This already presents a number of interesting and novel properties that we describe. Solving the equations for $p\ge 2$ requires more analytical work, along the lines of, e.g., [29]. 
We further anticipate that a richer phenomenology, such as a specialization transition as in [29], will arise for $p\ge 2$. A thorough exploration thereof warrants a full separate line of work, which we are currently undertaking, and falls outside the scope of this first work, where we introduce the model and framework and present the interesting observations for $p=1$. >Based on Figure 3, it appears that the performance of the DAE approaches that of the oracle denoiser. Can this convergence be rigorously derived from equation (13)? Thank you for this question. While the oracle denoiser (B4) admits the functional form of a DAE, the corresponding encoder and decoder weights are proportional, but not strictly equal, and therefore cannot be realized exactly by the weight-tied DAE, although the difference in performance is quantitatively small (see Fig. 3). In the final version of the manuscript, we shall provide a characterization of the $\alpha\to\infty$ limit of the DAE performance, and characterize precisely the difference with the oracle performance. >What happens when $K=1$? Do the DAE, PCA, and bottleneck network exhibit similar performance in this scenario? The $K=1$ case corresponds to setting $||\mu|| =0$ in our analytical characterization (Result 3.3). We shall include a discussion of this special case in the supplementary material of the final manuscript. In short, the oracle denoiser (B4) reduces for $K=1$ to a simple rescaling, so only the rescaling component of the DAE is actually needed. Indeed, as discussed in section 4, l.278-290, the role of the bottleneck component is to learn the data structure (as given by $\mu$), which it leverages to improve the denoising performance. In the unstructured $K=1$ ($\mu=0$) case, this component is not needed, and its presence actually causes the DAE to overfit the data, leading it to perform worse than the rescaling. 
Similarly, PCA denoising performs worse on unstructured data and leads to an mse that is worse by $\Theta(d)$, as in the $K>1$ case. In Fig. 2 of the attached pdf, we reproduce Fig. 1 (left) and Fig. 3 (b) for $K=1$, comparing the oracle, rescaling, DAE, bottleneck and PCA denoisers. This figure will be included in the supplementary material of the final manuscript. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their thorough answers to my questions and for running additional experiments. These have clarified my doubts and I have accordingly increased my score. I look forward to reading the final version of the paper!
Summary: The authors consider a two-layer weight-tied denoising autoencoder with a skip connection, in the regime of vanishing rate and a number of samples proportional to the dimension. They heuristically derive an exact characterization of the optimal network parameters and the corresponding network performance in the high-dimensional limit using the replica method. Their experiments show that their results match on synthetic data and are very close to the performance on more practical data. Strengths: - Exact characterization of all quantities of interest in the high-dimensional limit. - Strong experimental validation of theoretical claims Weaknesses: - From a theoretical perspective there is a lack of rigor to this method Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why do the limits in Line 152 exist? - In Figure 3, for high noise and low sample complexity, the performance of the full DAE is worse than simply rescaling. Since the full network can also act as a rescaling network by setting $w=0$, this implies that during training the global optimum is not found. Can you elaborate on this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - It should be stated more clearly that the way the replica method is carried out only serves as a strong heuristic and not a rigorous proof. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. We answer their questions below: >It should be stated more clearly that the way the replica method is carried out only serves as a strong heuristic and not a rigorous proof. The reviewer is entirely correct about the heuristic nature of the result, and we will further emphasize this fact after the statement of Result 3.1 in the final version of the manuscript. >Why do the limits in Line 152 exist? This is used as an assumption in the replica computation, which has been verified in related settings in previous works, see e.g. [47]. We will state the main assumptions of the method, i.e. the concentration of the overlaps and the uniqueness of the replica limit, explicitly. >In Figure 3, for high noise and low sample complexity, the performance of the full DAE is worse than simply rescaling. Since the full network can also act as a rescaling network by setting $w=0$, this implies that during training the global optimum is not found. Can you elaborate on this? Thank you for the question. While the full DAE can indeed act as a simple rescaling, it has access to a limited amount $n$ of training samples, and at the global optimum of the *empirical* loss, $w\ne 0$. For large noise levels $\Delta$, the DAE overfits the training data, leading to a performance worse than simple rescaling, as the reviewer correctly observes in Figs. 1 and 3. Note that as the sample complexity $\alpha$ increases, this overfitting disappears, as the empirical loss becomes closer to the population loss (see Fig. 3 (a)). The fact that our study captures this overfitting phenomenon is a strength of our analysis, which allows us to cover the effect of a finite amount of training data. Finally, note that the overfitting can be mitigated by adjusting the strength $\lambda$ of the regularization. In Fig. 1 of the attached pdf, we reproduce the same curve as Fig. 1, for various $\lambda$. 
Observe that the overfitting disappears for larger regularization, e.g. $\lambda=0.8$ (instead of $\lambda=0.1$ in Fig. 1), at the expense of worsened performance for small noise levels $\Delta$. We shall include this figure and additional discussion in the revised version of the manuscript. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clear answers and the added plot, which has cleared up my doubts. Accordingly, I have slightly increased my evaluation.
Summary: This paper presents theoretical results on the test error of a 2-layer denoising auto-encoder. In a high-dimensional limit regime where the data distribution follows a mixture of Gaussians, closed-form expressions for the error are obtained. The results are further analyzed and supported by numerical experiments on real data sets. Strengths: - The main result (Result 3.3) is highly non-trivial and provides a tight formula for the test error of the 2-layer denoising auto-encoder (DAE). - It shows that the DAE can perform much better than PCA in terms of MSE. - The role and importance of the skip connection is also highlighted both in theory and in practice, as is the non-linearity in the DAE. Weaknesses: - The optimal MSE error mse_o grows with the data dimension d (eq. 9); however, the gap between mse_f and mse_o (eq. 8) remains a constant. This suggests that the difference between the DAE and PCA is not so significant in terms of the relative MSE error, i.e. | mse_f - mse_o | / mse_o vs. | mse_PCA - mse_o | / mse_o. Thus it is still not very clear whether the DAE brings something essentially different from PCA or not (I agree that at least they are different). - In the main Result 3.3, it is not clear how the regularization term g is used, i.e. under Assumption 3.2, the parameter lambda does not appear in eq. (8); this is a bit strange to me. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - (line 33) Usually we say the sample complexity is related to n; it is a bit strange to say here that it is alpha. - (line 89) What does it mean that sigma(.) stays order 1? - (eq 13) Is there a unique solution to this system of 10 equations in 10 variables? - (line 219) Regarding the Gaussian universality, do you need to perform any normalization on the data for this to hold? The dimension of MNIST seems quite small; what dimension do you have in mind for the universality to hold? - (line 255) To clarify a previous point regarding PCA, does mse_f grow linearly with d? 
Do you have an idea of the relative MSE error of the DAE and PCA? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The main result is very specific, but this is quite normal in the literature. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful questions, which we answer below: >The optimal MSE error $mse_o$ grows with the data dimension d (eq. 9); however, the gap between $mse_f$ and $mse_o$ (eq. 8) remains a constant. This suggests that the difference between the DAE and PCA is not so significant in terms of the relative MSE error, i.e. $| mse_f - mse_o | / mse_o$ vs. $| mse_PCA - mse_o | / mse_o$. Thus it is still not very clear whether the DAE brings something essentially different from PCA or not (I agree that at least they are different). [...] To clarify a previous point regarding PCA, does $mse_f$ grow linearly with $d$? Do you have an idea of the relative MSE error of the DAE and PCA? The difference between the mse achieved by the DAE, $mse_{f}$, and the mse achieved by PCA is in fact significant and of order $\Theta(d)$. As we explain in l.255, $| mse_{PCA} - mse_\circ|=\Theta(d)$ and $| mse_{PCA} - mse_f|=\Theta(d)$, so both relative errors $| mse_{PCA} - mse_\circ|/mse_\circ$ and $| mse_{PCA} - mse_f|/mse_f$ are $\Theta(1)$. Therefore, both the DAE $\hat{f}$ and the rescaling $\hat{r}$ are considerably better than PCA. On the other hand, $|mse_{f} - mse_\circ|=\Theta(1)$, which hence implies $| mse_{f} - mse_\circ|/ mse_\circ=\Theta(1/d)$. Note that this also means the improvement of the full DAE $\hat{f}$ upon the rescaling MSE $mse_\circ$ is subleading. It is in a sense a limitation of our theoretical model that the interesting phenomenology appears at subleading order. At the same time, our real-data experiments show that the effects we predict from the theory are visible in real data, and correspond to visually significant changes (see images in Fig. 2 (left) and Fig. 4). >In the main Result 3.3, it is not clear how the regularization term g is used, i.e. under Assumption 3.2, the parameter lambda does not appear in eq. (8); this is a bit strange to me. 
Equation (8) involves the summary statistics $m,q,V, m_k,q_k, V_k$, which in turn depend on $\lambda$ through (13). We agree that a comment on this implicit dependence will improve the readability of the manuscript, and shall include one in the final version. >(line 33) Usually we say the sample complexity is related to n; it is a bit strange to say here that it is alpha. The denomination of sample complexity for $\alpha$ (which is equal to the number of samples $n$ normalized by the input dimension $d$) has been consistently used in the exact high-dimensional asymptotics literature. We will comment on it to avoid possible confusion for readers used to a different terminology. Since the two parameters are straightforwardly related by a factor $\frac{1}{d}$, they essentially describe the same quantity, with $\alpha$ presenting the advantage of being $\Theta(1)$ in the asymptotic limit considered. For the sake of clarity, we choose to keep this denomination, but will include an additional comment in l.130, where $\alpha$ is introduced. >(line 89) What does it mean that $\sigma(\cdot)$ stays order 1? What we mean is that the argument of the function $\sigma$ in (2), namely $\frac{w\tilde{x}}{\sqrt{d}}$, is of order $1$ as $d\to\infty$. We will clarify the phrasing in the revised manuscript. >(eq 13) Is there a unique solution to this system of 10 equations in 10 variables? While at this point we do not have a proof that (13) has a single solution, in all the studied cases only a single solution was found, up to symmetry. (Note that if $\sigma$ is odd, the function $f$ is invariant under $w\to -w$. Therefore, for each fixed point of (13), the solution obtained by flipping the sign of $m$ is also a solution, but it leads to the same learning metrics $\theta$ and mse.) Further, this solution agrees with numerical experiments. >(line 219) Regarding the Gaussian universality, do you need to perform any normalization on the data for this to hold? 
The dimension of MNIST seems quite small; what dimension do you have in mind for the universality to hold? In Fig. 2, the datasets are indeed flattened and centered, as described in Appendix D. Since the components of the resulting vectors lie between $0$ and $255$ (as they correspond to color levels), we further divide them by $400$ so as to have components of order $1$. The precise value of this normalization was not found to impact the agreement between the theory and the simulations. Note that the quantitative characterization of the generic conditions under which Gaussian universality holds for real datasets is still very much ongoing work in machine learning theory, even for supervised learning settings. Informally, in the present work, the intuition lies in the fact that the width of the hidden layer is finite, and thus much smaller than the input dimension, allowing some form of central limit theorem to hold. Finally, note that Gaussian universality has also been observed in dimension $784$ in supervised regression settings, see e.g. [47]. --- Rebuttal Comment 1.1: Title: accept Comment: Dear authors, Thanks for your detailed answer. I now understand your contributions better and have raised my score.
Summary: The authors set out to characterize the non-linear behavior of denoising auto-encoders (DAEs), for Gaussian mixtures, in the high-dimensional limit with the number of hidden units fixed. The authors particularly tease out the role of the skip connection, compared to the reconstruction auto-encoder (RAE), which is known to essentially perform principal component analysis (PCA). Using the replica method, the authors obtain closed-form expressions for the mean squared error (MSE), as well as the cosine similarity (w.r.t. cluster means). The obtained formulae are supported by experiments on both synthetic and real data sets, clearly highlighting the role of the sample complexity and the noise level. Strengths: - Presents an effective characterization of (non-linear) DAE behavior on Gaussian mixtures, clearly explaining the role of the reconstruction and scaling components, supported by compelling empirical evidence (also on real data). - Draws a number of important conclusions, pointing to fruitful directions of future work, e.g., L200, L218, L278. Weaknesses: One would wish the long sequence of mathematical expressions could be made less opaque. - This is remedied by a seemingly complete and well-composed supplementary. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: **Technical comments:** - L178: could you briefly explain the asymptotic dependence on $\alpha$? - L212: could you elaborate on the choice of closely related classes? - I wonder how the method fares with a mixture of more than 2 classes. - Is it true that $K=2$ also for the MNIST experiments? **Presentation comments:** - L158: perhaps it helps to discuss this rationale before stating the assumptions. - Understandably, it is an essential part of the contribution to provide those closed-form expressions, but I wonder which subset of the readers would benefit from having Eqs. 13 and 14 in the main paper. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Dedicating more space to help make the theoretical concepts and techniques employed more accessible, i.e., to a broader set of readers, would have been greatly appreciated. It is a limitation of this reviewer that I'm unable to assess the full scale of the theoretical derivations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of our work. We address their questions below: >One would wish the long sequence of mathematical expressions could be made less opaque. We will add further discussion beneath (13) and (14) in the revised manuscript, and provide qualitative insights into their meaning and implications. >L178: could you briefly explain the asymptotic dependence on $\alpha$? The characterization of Corollary 3.5 involves the sample complexity $\alpha=n/d$, whereas previous studies typically assumed an infinite amount of available training data ($\alpha\to \infty$). Our work therefore allows us to characterize the learning metrics when learning from finite data sets. It can further be shown from Corollary 3.5 that the MSE of RAEs asymptotically converges to its infinite-data limit $\alpha \to\infty$ as $O(\frac{1}{\alpha})$. We will add this computation and the relevant discussion in the supplementary material of the revised manuscript. >L212: could you elaborate on the choice of closely related classes? As discussed in the text (l.278-290), in order to have good denoising performance, the DAE has to learn the data structure, given by the cluster mean $\mu$. A priori, one expects the learning problem to be more challenging (and therefore more interesting) when the clusters are close and less distinguishable, i.e. when $\mu$ is small. Our choice of closely related classes for the real data experiments stems from that qualitative intuition. We will include further discussion in the revised version of the manuscript. >I wonder how the method fares with a mixture of more than 2 classes. Is it true that $K=2$ also for the MNIST experiments? We indeed took for simplicity $K=2$ also in the MNIST experiments, by retaining $2$ out of the $10$ classes, assuming that each class can be modelled by a Gaussian cluster.
While we state the asymptotic formulae for any $K$ of order $1$, as the phenomenology of the results we obtain is already rich and novel for $K=2$, we leave the exploration of the more general case for future work. >L158: perhaps it helps to discuss this rationale before stating the assumptions. We agree that mentioning that Assumptions 3.1 and 3.2 can be relaxed before stating them would improve the readability. We will take this into account in the revised manuscript. >Understandably, it is an essential part of the contribution to provide those closed-form expressions, but I wonder which subset of the readers would benefit from having Eqs. 13 and 14 in the main paper. (13-14) have been included in the main paper so that the result statement is self-contained, thereby avoiding the need to reference equations placed elsewhere in the manuscript. We believe that the inclusion of further discussion thereof, as suggested in the reviewer's previous remark, will provide further insight into these equations and justify their position in the main text. >Dedicating more space to help make the theoretical concepts and techniques employed more accessible, i.e., to a broader set of readers, would have been greatly appreciated. We will devote further discussion to an explanation of the replica method, which we employ to derive the main result, so as to improve the readability for a broader audience. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you for addressing my comments. I'm adjusting my score in light of the discussion with reviewer ZGsS.
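The review and rebuttal above both lean on the known fact that the reconstruction auto-encoder (RAE) essentially performs PCA on Gaussian mixture data. Below is a minimal, self-contained sketch of that baseline behavior on a $K=2$ mixture; all names and parameter values are illustrative and not taken from the paper. Noisy samples are denoised by projection onto the top principal direction, and the two metrics discussed in the review, MSE and cosine similarity with the cluster mean, are computed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 2000                       # dimension and sample size (alpha = n/d = 10)
mu = 3.0 * np.ones(d) / np.sqrt(d)     # cluster mean; K = 2 clusters at +/- mu

labels = rng.choice([-1.0, 1.0], size=n)
X = labels[:, None] * mu + rng.standard_normal((n, d))   # clean mixture samples
Y = X + 2.0 * rng.standard_normal((n, d))                # noisy observations

# PCA "denoiser": project noisy points onto the top principal direction,
# mimicking the known RAE-as-PCA behavior mentioned in the review.
_, _, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
v = Vt[0]                                # top right singular vector (unit norm)
X_hat = (Y @ v)[:, None] * v             # rank-1 reconstruction

# Metrics from the review: cosine similarity w.r.t. the cluster mean, and MSE.
cos_sim = abs(v @ mu) / np.linalg.norm(mu)
mse_denoised = np.mean(np.sum((X_hat - X) ** 2, axis=1)) / d
mse_identity = np.mean(np.sum((Y - X) ** 2, axis=1)) / d   # do-nothing baseline
```

At these (hypothetical) settings the top principal direction aligns strongly with the cluster mean, and the PCA projection beats the do-nothing baseline on per-coordinate MSE, which is the qualitative picture the reviewer's questions about $\alpha$ and the noise level probe.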
Rebuttal 1: Rebuttal: We attach here a .pdf, containing additional figures which we refer to in the separate rebuttals. Pdf: /pdf/79c55a2ca4fc0047eaeefcfab83c8995f6dcfb6a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Provable benefits of score matching
Accept (spotlight)
Summary: The authors describe an exponential family where the sufficient statistic $T(x)$ contains all non-constant monomials of degree $\leq d$, the background density is $h(x) = \exp(-\sum_{i=1}^n x_i^{d+1})$, and the parameter $\theta$ is constrained to have infinity norm bounded by $B$. Using a reduction from $3\mathsf{SAT}$, they show that under this exponential family, finding the MLE is NP hard, and thus unless $\mathsf{NP} = \mathsf{RP}$, that it would take time exponential in the random vector dimension $n$ to compute the MLE. They also show that the MLE has asymptotic sample efficiency $(nB)^{O(d^3)}$. They show that score matching also has asymptotic sample efficiency $(nB)^{O(d^3)}$. Unlike the MLE, score matching can be solved in time polynomial in the dimension of $\theta$, which if $d$ is considered constant, is polynomial in $n$. This is because the objective corresponding to score matching is convex in $\theta$. This is a concrete example of a situation where score matching has a provable advantage over the MLE (same asymptotic sample efficiency, but one needs exponential time unless $\mathsf{NP} = \mathsf{RP}$ and the other is polynomial time). Strengths: - clear contribution - relevant related work is discussed - this is a significant result (rigorous justification for potential benefit of score matching over MLE) Weaknesses: No glaring weaknesses, but the paper was a bit difficult to follow; I found it hard to connect the math-heavy theorem/lemma statements together. For example, I did not understand how Lemmas 4.2 and 4.3 built toward Lemma 4.4 on my initial read. Typos spotted: - Equation under line 172, $\|\theta\|_2^2$ should be $\|\theta^*\|_2^2$? 
- Line 482, $f''(x) = 56\gamma x^6 - 2\beta + 6\beta x^2$ should be $f''(x) = 56\gamma x^6 - 4\beta + 12\beta x^2$ I think, but this doesn't affect anything - Lemma A.6, the $(2/e)^n$ should be replaced with $(e/2)^n$ - Math at the bottom of page 16, second line, you want to minimize the exponent so instead of $-BM(BM)^{-d}$ it should be $-BM(BM)^{-1}$ right? I don't think this affects anything though since you bound it by $-1$ regardless - Math under line 519, I'm not sure that $(n+\ell)\log(2L) + 2 + n\log(BM) \leq \frac14 L^{d+1}$, unless some lower bound on $d$ is assumed? Like I'm not sure if that holds for $d=1$? - Lemma B.3, bound on Laplacian in the Lemma statement doesn't seem to match the derivation (exponent of 2d vs 4d) - Line 564 - this appears to be claiming that $\sum_{j=0}^k \binom{k}{j}^2 = 2^k$, which is false? I think you could bound it by $2^{2k}$ though by Cauchy-Schwarz, so it probably doesn't matter, just changes some constants down the line. - Line 573 - $B_\iota \to W_\iota$ - Line 583 - $\alpha_i \to d_i$ Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Why is a nonzero constant not allowed for the polynomial defined by the sufficient statistic of the exponential family? It wasn't clear to me, also why that was a requirement for Lemma 4.4 to hold. Does that cause a problem? - This does not appear necessary to have in this work to me, but how close are we to obtaining finite sample bounds for score matching, analogous to the ones (I assume there are?) for MLE? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
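The sufficient statistic in the construction summarized above collects every non-constant monomial of degree at most $d$. As a quick illustrative sketch (the function names `monomial_exponents` and `T` are hypothetical, not from the paper), the statistics can be enumerated as exponent vectors $a \in \mathbb{N}^n$ with $1 \le \sum_i a_i \le d$; a stars-and-bars count gives $\binom{n+d}{d}-1$ of them.

```python
from itertools import product
from math import comb

import numpy as np

def monomial_exponents(n, d):
    """All exponent vectors a in N^n with 1 <= sum(a) <= d (non-constant monomials)."""
    return [a for a in product(range(d + 1), repeat=n) if 1 <= sum(a) <= d]

def T(x, d):
    """Sufficient statistics: evaluate every non-constant monomial of degree <= d at x."""
    x = np.asarray(x, dtype=float)
    return np.array([np.prod(x ** np.array(a)) for a in monomial_exponents(len(x), d)])

# Sanity check of the count: 9 monomials in 3 variables of degree <= 2.
assert len(monomial_exponents(3, 2)) == comb(3 + 2, 2) - 1
```

For instance, with $n=2$ and $d=1$ the statistics are just the two coordinates themselves, so `T([2.0, 1.0], 1)` returns the pair of coordinate values (in the enumeration order of `product`).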
Rebuttal 1: Rebuttal: We thank the reviewer for their time and for a very careful reading. We'll fix the typos -- as the reviewer notes, a few constants will have to be updated but the main results are all unchanged (up to some constant factors in the exponents). To be a bit more precise about the more mathematical typos: **Q:** *``Math at the bottom of page 16''* You're entirely right, it should be $-BM(BM)^{-1}$; as you say this doesn't change anything. **Q:** *``Math under line 519"* Instead of $L_0 := \max(\ell, BM2^{d+1})$ we should define $L_0 := 32\max(\ell,BM2^{d+1})$. Now in line 519, since $L \geq L_0$ and $B \geq 1$ and $M \geq n$, we can show that $\frac{1}{4}L^{d+1} \geq (n+\ell)\log(2L) + 2 + n\log(BM)$. Indeed $(n+\ell)\log(2L) \leq 2\max(n,\ell) \cdot L \leq L^2/16$. Similarly $2 \leq L^2/16$ and $n\log(BM) \leq L^2/16$. Finally $L^2 \leq L^{d+1}$. This adds a constant factor of $32^\ell$ to the moment bound in the lemma statement, which is not substantive. **Q:** *``Lemma B.3''* Yes, thanks, we'll update the statement to match the derivation (the exponents are off by a factor of $2$). **Q:** *``Line 564''* Yes, since each binomial is at most $2^k$ we can bound the stated quantity by $2^{2k}$, which changes a constant in the exponent. **Q:** *``Why is a nonzero constant not allowed for the polynomial defined by the sufficient statistic of the exponential family? It wasn't clear to me, also why that was a requirement for Lemma 4.4 to hold. Does that cause a problem?''* Including a nonzero constant as a sufficient statistic wouldn't actually change the family of distributions captured, since an additive constant in the exponent of the density is canceled out by the constant of proportionality. Note that as a result the parameter corresponding to the constant statistic would not even be statistically identifiable. This shows up in Lemma 4.4, where if we allowed e.g. 
the constant polynomial $f=1$, this has variance $0$ but monomial norm $1$, so the lemma would not be true. **Q:** *``This does not appear necessary to have in this work to me, but how close are we to obtaining finite sample bounds for score matching, analogous to the ones (I assume there are?) for MLE?''* We believe that it's probably possible to get analogous finite-sample guarantees by similar techniques. In particular, the prior work by Koehler, Heckett, and Risteski (which shows that the asymptotic efficiency of SM and MLE are related via a restricted Poincare constant) also shows that the finite-sample efficiency can be bounded in terms of a restricted log Sobolev constant and a Rademacher bound. It's certainly not immediate from our results, but we would speculate that our techniques for bounding the restricted Poincare constant can extend to bounding the restricted log Sobolev constant, and that the Rademacher complexity should be bounded via standard arguments. It would then remain to convert the KL-divergence error bound (obtained in Theorem 1 of their work) into a parameter error bound, which ought to follow from our bounds on the Fisher information. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to address my comments!
Summary: This paper provides an example of fitting exponential family models for which score matching and MLE are both statistically efficient, but MLE is computationally hard to optimize. Strengths: The strength is in the construction of an example to showcase the benefit of score matching over MLE. Weaknesses: Several aspects could be improved. 1. First, the computational lower bound on evaluating the loss and its gradient is worst-case. When one has samples from such a distribution, is solving the MLE still computationally hard? 2. Although the aim of this paper is to provide examples advocating for score matching, for the distribution family proposed in the paper, are there simple (or simpler) estimators that are both computationally and statistically efficient? 3. In the paper, the analysis of the statistical performance is quite crude, which makes it hard to see what statistical cost one needs to pay when computation is the restriction. Is there a proper computation-statistics tradeoff for the model considered in the paper? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is the constant removed from the construction of the sufficient statistics? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. To address their questions: **Q:** *``First, the computational lower bound on evaluating the loss and its gradient is worst-case. When one has samples from such a distribution, is solving the MLE still computationally hard?''* Good question; unfortunately proving NP-hardness of average-case problems (i.e. when the input is drawn from a nice distribution) is generally not doable. One alternative is to prove a reduction from some conjecturally average-case hard problem such as Planted Clique, but average-case reductions are notoriously tricky. A final alternative common in theoretical computer science is to prove failure of some restricted class of algorithms (e.g. low-degree polynomials). All these alternatives are potentially doable, but would become technically involved and stray from the crux of the matter -- with enough samples, score matching and MLE are essentially just two different techniques for solving the same computational problem, so our main goal with the worst-case computational lower bounds is to illustrate computational difficulties faced by a standard implementation/analysis of MLE. **Q:** *``Although the aim of this paper is to provide examples advocating for score matching, for the distribution family proposed in the paper, are there simple (or simpler) estimators that are both computationally and statistically efficient?''* We are not aware of any other computationally and statistically efficient estimators for this family. Perhaps the closest related work is the following paper, which suggests a computationally efficient estimator for learning some exponential family distributions. However, they require the family to have bounded support, in addition to several (rather complex) norm assumptions. Shah, Shah, and Wornell, *``A Computationally Efficient Method for Learning Exponential Family Distributions.''* **Q:** *``In the paper, the analysis of the statistical performance is quite crude, which makes it hard to see what statistical cost one needs to pay when computation is the restriction. Is there a proper computation-statistics tradeoff for the model considered in the paper?''* Understanding the statistical cost of score matching at a more fine-grained level is an interesting direction for future work. It is possible that score matching matches the statistical efficiency of MLE, although it seems more likely there is some polynomial gap. Speaking more broadly, we are not aware of *any* continuous exponential family with a provable statistical/computational tradeoff. Of course, proving such a lower bound is orthogonal to the purpose of this work, which was simply to show that score matching is roughly as efficient as MLE without the computational drawbacks. But this is still useful context and helps explain why proving lower bounds and separations is challenging even for exponential families. We will add this discussion. **Q:** *``Why is the constant removed from the construction of the sufficient statistics?''* Including a nonzero constant as a sufficient statistic wouldn't actually change the family of distributions captured, since an additive constant in the exponent of the density is canceled out by the constant of proportionality. Note that as a result the parameter corresponding to the constant statistic would not even be statistically identifiable. This shows up in Lemma 4.4, where if we allowed e.g. the constant polynomial $f=1$, this has variance $0$ but monomial norm $1$, so the lemma would not be true. --- Rebuttal Comment 1.1: Comment: Thanks for the response addressing my questions.
Summary: In this paper, the authors present a mathematical setting where the Score Matching (SM) method has benefits over the Maximum Likelihood (ML) technique, when estimating a parameterized probability distribution $p_\theta \in P(\mathbb{R}^n)$ known up to a normalizing constant $Z_\theta$. In particular, they describe an explicit exponential family of distributions $F$ for which the SM loss $L_{SM}$ is efficient to compute, with the same statistical efficiency as the ML loss $L_{ML}$ (Theorems 2 and 3), while $L_{ML}$ is shown to be intractable in polynomial time depending on the parameters of this family (Theorem 1). Given an odd integer $d$ and $B>0$, any distribution $p_\theta \in F$ is notably defined by (i) its vector of sufficient statistics, which consists of all monomials in $x^1,..., x_n$ of at least degree $1$ and at most degree $d$, and (ii) its parameter $\theta$, which lies in the $\ell_\infty$-ball with radius $B$. This work is the first to rigorously compare the statistical efficiency of SM and ML for a large family of continuous probability distributions. Three main theoretical results are stated here. In Theorem 1, the authors prove that, for any $p_\theta \in F$ (with $d=7$) and any set of $N$ independent samples from $p_\theta$, it is NP-hard (in $n$ and $N$) to provide an accurate approximation of $L_{ML}(\theta)$ and $\nabla L_{ML}(\theta)$. This result comes from the difficulty of approximating $Z_\theta$, which is necessary to compute $L_{ML}(\theta)$, while it does not appear in $L_{SM}(\theta)$. In Theorem 2, they derive an upper bound on the $\ell_2$-error between $\theta$ and its ML estimator obtained via $N$ samples, in the limit where $N\to \infty$, for any $p_\theta \in F$. Their proof relies on the asymptotic result given by [1] and consists of lower bounding the smallest eigenvalue of the Fisher information matrix of $p_\theta$.
In Theorem 3, they derive the same upper bound in the case of the SM estimator for any $p_\theta \in F$, by invoking the asymptotic result from [2] and bounding the Poincaré constant of $p_\theta$. Combined together, Theorems 2 and 3 show that the ML and SM techniques roughly have the same statistical efficiency. [1] Asymptotic statistics, Van der Vaart, 2000. [2] Statistical Efficiency of Score Matching: The View from Isoperimetry, Koehler et al., 2022. Strengths: - This work provides a fair comparison of the statistical efficiency of the SM and ML techniques, which is of primary interest for the machine learning community. - Although Theorems 2 and 3 rely on important existing theoretical results, their conclusions are not straightforward to obtain, and their proofs are original and clear to understand. Weaknesses: Although the theoretical results presented here are interesting in their own right, they should be combined with numerical experiments illustrating the trade-off between these two methods in practice, depending on the parameters of the exponential family. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Why do the authors not keep the full dependence on $d$ in the bounds of Theorems 2 and 3? Does it change anything between SM and ML? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: In this work, the asymptotic theory relies on the fact that the MLE and SM estimators can be computed exactly from a collection of samples. However, to compare these two techniques for practical purposes, one should include the approximation error of the estimator.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. To address the reviewer's questions: **Q:** *``Why do the authors not keep the full dependence on $d$ in the bounds of Theorems 2 and 3? Does it change anything between SM and ML?''* In both cases the dependence is $O(d^3)$. We stated the dependence as $\textsf{poly}(d)$ in the introduction just to make the theorems cleaner -- we are thinking of $d$ as a small constant, and the key takeaway is that for both score matching and maximum likelihood, the sample complexity is $\textsf{poly}(n,B)$. It's certainly plausible that maximum likelihood may achieve a smaller constant in the exponent than score matching; proving such a statistical separation is an interesting direction for future work but complementary to the purpose of this work, which is to establish conditions under which they achieve comparable rates. **Q:** *``In this work, the asymptotic theory relies on the fact that the MLE and SM estimators can be computed exactly from a collection of samples. However, to compare these two techniques for practical purposes, one should include the approximation error of the estimator.''* It's true that there may be some approximation error in computing the MLE from finite samples (even ignoring our evidence that this task may be computationally intractable). However, as shown in Equation (2) in our paper, the score matching estimator has a closed form in our setting, so there is no approximation error. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Although I find the theoretical result really interesting and non-trivial, I think that there should be numerical experiments that support the theory. That is why I keep my score unchanged.
Summary: The paper attempts to elucidate the theoretical reasoning for the benefits seen in score matching. The author proposes a family of exponential distributions for which the score matching loss can be computed efficiently while retaining statistical efficiency comparable to that of maximum likelihood. Strengths: - The paper is well written/organized and easy to follow. The author does a good job introducing related theoretical information. - Equations are accompanied by thorough descriptions. Weaknesses: - There are no experimental results. It would be nice if some machine learning models were optimized with score matching and ML and compared with each other. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - What is the benefit of using this score matching method over taking the denoising score matching approach? - Is this approach more computationally efficient than denoising score matching? How well does this score matching approach scale to high-dimensional data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The author has adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. To address the reviewer's questions: **Q:** *``What is the benefit of using this score matching method over taking the denoising score matching approach?''* Recall that the latter is score matching applied to an annealed version of the distribution. It's generally believed that the annealing improves the statistical performance of score matching. It's also generally believed that score matching (even without annealing) has computational benefits over MLE. The goal of this paper was to provide a provable example of the latter. The fact that annealing may further improve statistical efficiency in a sense only strengthens the thesis of our paper: that the score matching technique has provable computational benefits over the standard MLE technique, without sacrificing statistical efficiency. **Q:** *``Is this approach more computationally efficient than denoising score matching? How well does this score matching approach scale to high-dimensional data?''* For exponential families, the score matching loss is quadratic, so the global optimum actually has a closed-form expression (see equation (2) in the paper), which can be computed in time polynomial in the number of sufficient statistics and linear in the number of samples. In high-dimensional settings one might be interested in even stronger computational guarantees, but we have not yet investigated whether this is possible, and if it is, it would likely require new techniques. It's possible that the denoising score matching loss could also be optimized efficiently, but it's not immediately clear one way or the other. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my score the same.
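The rebuttal's point that the score matching objective is quadratic for exponential families, so its minimizer is available in closed form, can be sketched on a toy one-dimensional family $p_\theta(x) \propto \exp(\theta_1 x + \theta_2 x^2)$ (an illustration only, not the polynomial family from the paper). The model score is $s_\theta(x) = \theta_1 + 2\theta_2 x$, so the Hyvärinen loss $\mathbb{E}[s_\theta(x)^2/2 + s_\theta'(x)]$ is quadratic in $\theta$, and setting its gradient to zero yields a $2\times 2$ linear system in sample moments.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=200_000)   # samples from N(mu, sigma^2)

# Family p_theta(x) ∝ exp(theta1*x + theta2*x^2); model score
# s_theta(x) = theta1 + 2*theta2*x, so the Hyvarinen objective
# J(theta) = E[ s_theta(x)^2 / 2 + s_theta'(x) ] is quadratic in theta.
# Its stationarity conditions are:
#   dJ/dtheta1 = E[s_theta(x)]        = theta1 + 2*theta2*m1        = 0
#   dJ/dtheta2 = E[2*x*s_theta(x)] + 2 => theta1*m1 + 2*theta2*m2   = -1
m1, m2 = x.mean(), (x ** 2).mean()
A = np.array([[1.0, 2 * m1],
              [m1, 2 * m2]])
b = np.array([0.0, -1.0])
theta1, theta2 = np.linalg.solve(A, b)

# Ground truth for comparison: N(mu, sigma^2) corresponds to
# theta1 = mu/sigma^2 and theta2 = -1/(2*sigma^2).
```

Solving the system gives $\theta_2 = -1/(2\,\widehat{\mathrm{var}})$ and $\theta_1 = \bar{x}/\widehat{\mathrm{var}}$, which match the Gaussian ground truth, with no iterative optimization and no normalizing constant ever computed; this is the computational advantage the rebuttal describes, in miniature.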
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Collaboratively Learning Linear Models with Structured Missing Data
Accept (poster)
Summary: The paper discusses the idea of estimating least squares collaboratively when each agent has access to a different set of features of the same data. The aim is to design an algorithm that is effective and efficient in terms of communication cost (various agents transferring/communicating information/data). The paper introduces a new algorithm, *COLLAB*, that is efficient and also applicable in security settings where features cannot be transferred between agents/sensors/machines/systems. The authors theoretically prove that the proposed algorithm *COLLAB* is minimax optimal and perform a set of experiments to showcase its capabilities. Strengths: The paper discusses a setup typical in many fields (especially security applications) where data cannot be transferred between agents due to security reasons or input-output constraints (network bottleneck). In the setup discussed, when the agent has a linear model, ordinary least squares (OLS) is a natural way to estimate the parameter. The idea behind *COLLAB* is logical and intuitive; however, it needs more clarity in the presentation. The comparison against imputation methods is also sensible, as these are go-to models for such setups. The authors derive local minimax lower bounds to show that the proposed algorithm *COLLAB* is very close to optimal, which is interesting. For the correctness of this section, I would rely on other reviewers, as I was not able to follow the derivations clearly. The experiments are also a good mixture of real-world and synthetic data sets on which the capabilities of the proposed algorithm are shown. Weaknesses: * A nice idea, but some assumptions are very hard/limiting, and the evaluations are limited. * Some details (especially Sections 3.1 and 3.2) are presented in a convoluted manner and are hard to grasp. Some minor clarifications: - L110: is the `x` written here the same as in L99? It probably should be `X`?
- L132, L133: I am still not sure what `(i)` refers to here. - In general, I believe it is convenient to stick to the notation where bold **`X`** is a matrix, bold **`x`** is a vector, and `x` is a scalar. It makes the equations easy to follow. * Evaluations are limited. More empirical evaluations should be performed to compare the proposed algorithm against other methods. A limitation here is the linear model in the agents, which restricts the modelling capability. * Considering non-linear cases as well would make the paper more solid. The agents would then be more flexible, and the authors could experiment with more complex data sets. * Computational benefits should be shown empirically as well. Currently, they are only shown theoretically. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * The paper assumes that all the agents have the same linear model. How restrictive is this? One agent may have a feature vector in $\mathbb{R}^{16}$, whereas another might have one in $\mathbb{R}^2$, so assuming the same model complexity does not seem natural here. * A second assumption is that each agent has enough data to estimate $\Sigma$. How will it impact the algorithm if there is little data and $\Sigma$ is unreliable/biased? * A toy example plotting the various agent parameters $\theta_i$, how they combine to give $\theta_{global}$, and how they compare with the oracle value of $\theta$, would be helpful. * Is there a setup where the computational benefits can be shown empirically as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do not discuss them, and I do not see a direct limitation of this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Response to W1: limited assumptions and evaluations**: We admit that our current theory relies heavily on the Gaussianity assumption. Generalizations beyond it involve various technical challenges, which make giving strong theoretical guarantees (i.e., what we were able to do in our linear Gaussian model) very difficult. In our work, we opt to provide these strong guarantees in lieu of more generality. We point out that the experiments are on real data, suggesting potential generalization of our theory beyond the Gaussian case. Removing linearity/Gaussianity would be an interesting future direction that we have discussed in Section 7. Having said that, we are happy to explain in more detail if you want further clarification on particular assumptions/evaluations. **Response to W2: minor clarifications**: Thank you for the suggestions; we will make clarifications in the camera-ready version for the following points. 2.1: Yes, the $x\in \mathbb{R}^d$ in line 110 is the same as the $x\in\mathbb{R}^d$ in line 99. 2.2: The $(i)$ refers to the second equality in the equation between lines 131 and 132. 2.3: This is a good suggestion, and we will try to make the notation clearer in the camera-ready version. **Response to W3: limited evaluations**: We partially address this in our response to W1. We want to emphasize that we believe our main contribution is the theory. The focus of our preliminary experiments is to show that our method does not overfit to the Gaussian data setting. **Response to W4: non-linear cases**: See our response to W1. **Response to W5: empirical justification of computational benefits**: Thank you for bringing this up. We think there is some confusion, as our focus is on communication cost instead of computational cost. Please correct us if we misunderstood.
We note that the communication costs in Table 1 are actually not asymptotic: local imputation requires communicating $d^2$ real numbers, global imputation $nd_i$ real numbers, and our method $d_i^2$ real numbers; we will clarify this in the camera-ready version. For this reason, we believe benchmarking real communication costs is not necessary. **Response to Q1: dimensional differences among agents**: Indeed, our model setup assumes the same underlying model with agent-dependent, unobserved dimensions. We want to clarify that this does not preclude dimensional differences among agents. Our setup allows one agent to observe data in $\mathbb{R}^{16}$ dimensions while another observes data in $\mathbb{R}^{2}$ dimensions; the second agent would just observe fewer dimensions than the first agent. It would be very challenging to develop theory when the underlying models are different for each agent, as there is no shared global model we can analyze. We feel this is beyond the scope of the discussion in our paper. **Response to Q2: unreliable/biased estimate of the covariance**: This is an interesting point. We first want to point out a potential confusion that could be caused by a typo in the definition of $\hat{W}_i^g$ (line 155). The numerator is supposed to be the sample sub-covariance $\hat{\Sigma}_{i+} = X_{i+}^\top X_{i+}/n$ instead of the exact sub-covariance matrix $\Sigma_{i+}$. In fact, we use $\hat{\Sigma}_{i+}$ in Algorithm 1 and in the proof of Corollary 3.2 in the submitted supplementary materials. It is not clear what the optimal procedure is if we do not have a consistent estimate of the population covariance, which essentially boils down to the harder problem of distribution shift. This could be a future direction, and we will include it in our discussion. **Response to Q3: suggestion of a toy example**: Thank you for this experimental suggestion.
As the $\theta_i$ are in general high-dimensional (dimension > 3), the plots we could make would probably show $\ell\_2$ error against the ground truth, which might be less clear, so we opted for real-data experiments given the page limit. Please let us know if you have any suggestions for how to visualize a toy example. **Response to Q4: computational benefits**: See our response to W5. --- Rebuttal Comment 1.1: Comment: Thank you for the response! After going through other reviews and replies, I will stay with my original score. In my opinion, a discussion of the current assumptions and how they can be relaxed, of agents with different models (as agents can have different model complexity), of communication cost, and a simple example to explain the benefits of the proposed model intuitively would be a good addition and make the paper complete.
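To make the communication-cost comparison in the rebuttal above concrete, here is a tiny sketch with purely hypothetical sizes ($d$ = full feature dimension, $d_i$ = agent $i$'s observed dimension, $n$ = labeled samples per agent); the specific numbers are illustrative assumptions, not values from the paper:

```python
# Illustrative only: plug hypothetical sizes into the per-agent
# communication costs quoted in the rebuttal above.
d, d_i, n = 20, 5, 1000

costs = {
    "local imputation": d * d,     # d^2 real numbers
    "global imputation": n * d_i,  # n * d_i real numbers
    "Collab": d_i * d_i,           # d_i^2 real numbers
}

# Only global imputation grows with the sample size n.
assert costs["Collab"] < costs["local imputation"] < costs["global imputation"]
for name, cost in costs.items():
    print(f"{name}: {cost} real numbers")
```

Note that only the global-imputation cost depends on $n$, which is why the rebuttal argues the other two are not asymptotic.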
Summary: The authors investigate collaborative learning of least squares estimates for multiple agents with varying feature subsets. The goal is to coordinate the agents efficiently to achieve optimal estimators without exchanging labeled data. To address this, the authors propose the distributed algorithm Collab, consisting of local training, aggregation, and distribution steps. Despite not sharing labeled data, Collab approaches near-asymptotic local minimax optimality, outperforming methods that do utilize labeled data. They validate their approach through experiments on real and synthetic datasets. Strengths: Distributed learning with heterogeneous data sources is a problem of broad interest. In this study, the authors tackle this problem in a simplified setting and provide robust theoretical guarantees. Their theoretical results are strong, demonstrating the solid foundation of their approach. In addition, the focus on minimizing communication resources is an interesting angle. Furthermore, the experimental results are compelling, further supporting the effectiveness of their method. Weaknesses: - Settings need more justification: the authors discuss a setting where a linear regression problem is running on satellites with completely different features. This seems restrictive; can the authors elaborate more on the motivation of their study? - Random X with zero mean. If X is not zero-mean, then the regression with partial information is not consistent anymore (different features may have correlations). I imagine this is a very common scenario; can the authors provide some discussion of it? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In Section 7, is it the case that the numerator in *generalizing to non-linear models* requires sharing global information for $x_{i+}$? What are the possible ways to alleviate this constraint? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: I do not foresee any potential societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Response to justifications for our setting**: Regarding the satellite application, we agree it is a stylized example and have expanded the introduction to discuss other potential applications like sensor networks, weather stations, and hospitals. The key aspects we wish to capture are: 1) agents have different features due to heterogeneous data sources, 2) agents wish to leverage correlations and train collaborative models, 3) communication constraints exist. We believe these aspects apply broadly. The high-level motivation for our work is to study how practitioners should handle hardware heterogeneity in communication-constrained collaborative learning environments. Works in federated learning have considered how to train with heterogeneous computational hardware, e.g., learning with client phones that have different processing power [Yang et al.]. But little attention has been paid to settings with heterogeneous measurement devices; e.g., what should we do if the features collected by each agent are heterogeneous? As we show, this problem is challenging, even in the simplest linear regression setting. **Response to zero-mean data**: Thank you for bringing up the point about the zero-mean assumption. It is standard practice in machine learning to first center the data by subtracting the mean. This pre-processing step results in zero-mean features, which is why many other theory papers study the zero-mean setting (e.g., Hastie et al., [3]). In fact, we followed this approach in the real census data experiment by centering each feature before model training and evaluation. As seen in Figure 1, COLLAB either outperforms or is competitive against baselines, indicating it is robust even when the zero-mean assumption is violated in practice. **Response to the question**: This is a good question. 
We want to clarify that the formula is under the expectation of $x\_{i+}$, which is a population quantity that each agent can estimate consistently and locally. So, no, the numerator does not require sharing individual data samples $x\_{i+}$, as we assume that each agent has $n$ labeled samples with $n$ growing to infinity (over the subset of the features they can observe). Thus, each agent can estimate the numerator individually with the samples they have access to and with their own local model. **Additional References** Hastie, Trevor J. et al. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation.” Annals of Statistics 50(2) (2022): 949–986. Yang, Chengxu et al. “Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data.” Proceedings of the Web Conference 2021 (2021). --- Rebuttal Comment 1.1: Comment: For the "response to the question": thanks for the answer! This potentially inspires a practical iterative algorithm for non-linear settings (e.g., $\theta$ is the parameter for a complex neural network): to solve for $\theta$ globally, at each step the global processor sends the current $\theta$ to the local processors; each local processor then computes the gradient (or a sub-gradient) of the proposed local loss function with respect to $\theta$ and returns it to the global processor. Based on the gradient information, the global processor then updates its $\theta$ accordingly. This procedure can be conducted iteratively. One particular choice for $f(x_i^{+}; T_{i}\theta)$ can be $f([0, 0, 0, ..., x_{i}^{+}, ..., 0, 0, 0]; \theta)$ (taking the inputs for other features as 0 for the local processor $i$ when computing the prediction given the global $\theta$, commonly used in deep learning when some features are missing). This can be viewed as a generalization of existing federated learning algorithms to the heterogeneous feature-observation setting. 
I hope the authors can elaborate on this in their revised version as I believe this will likely make the impact of this work significantly larger. --- Reply to Comment 1.1.1: Comment: That is a good point. There could be possible generalizations of our work to heterogeneous feature-observation federated learning settings. The proposed loss could indeed be minimized in a distributed/federated manner to aggregate local models iteratively. If we understand you correctly, we believe the algorithm would look like 1. Send $\hat{\theta}$ to each agent. Have the agent minimize their own loss (initialized at $\hat{\theta}$), and call the final parameter $\hat{\theta}_i$. 2. Minimize the proposed loss function (the one between lines 324 and 325) in a federated way as a means of aggregating the model. Call the final parameter $\hat{\theta}$. 3. Repeat. Thank you for bringing up this connection to federated learning. We will add this discussion to the camera ready version.
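The iterative local-update/aggregation loop discussed in the thread above can be written down as a toy loop. This is a hypothetical illustration, not the paper's Collab algorithm or its proposed aggregation loss: we assume independent standard-Gaussian features (so local fits over observed coordinates are unbiased up to finite-sample noise), plain gradient descent as the local step, and naive coordinate averaging as the aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 400
theta_true = rng.normal(size=d)

# Hypothetical agents: each observes a fixed subset of the d features,
# with its own fresh samples and independent standard-Gaussian covariates.
masks = [np.array([0, 1, 2]), np.array([1, 2, 3]), np.array([0, 2, 3])]
data = []
for mask in masks:
    X = rng.normal(size=(n, d))
    y = X @ theta_true + 0.1 * rng.normal(size=n)
    data.append((X[:, mask], y))

def local_step(theta, Xi, yi, mask, lr=0.5, steps=100):
    # Agent updates only the coordinates it observes, via gradient
    # descent on its local squared loss.
    th = theta.copy()
    for _ in range(steps):
        grad = Xi.T @ (Xi @ th[mask] - yi) / len(yi)
        th[mask] -= lr * grad
    return th

theta = np.zeros(d)
for _ in range(30):  # outer communication rounds
    updates = [local_step(theta, Xi, yi, m) for (Xi, yi), m in zip(data, masks)]
    theta = np.mean(updates, axis=0)  # naive averaging as aggregation

# Works here only because independent features make each local fit
# unbiased up to O(1/sqrt(n)) cross-correlation noise.
print(np.linalg.norm(theta - theta_true))
```

With correlated features the local fits suffer omitted-variable bias, which is precisely the debiasing problem the rebuttals say Collab's aggregation step addresses.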
Summary: The paper studies statistical inference in learning a linear regression model in the cross-silo or vertical federated learning setting. The paper formulates the problem as a missing data problem and chooses single imputation methods to deal with the associated inference of the common parameter. The paper shows the theoretical properties of the proposed estimator under some assumptions. Finally, the paper compares their results with other existing methods. Strengths: The paper studies a very important problem and is relatively easy to follow. Weaknesses: 1. The paper is not carefully written, with confusing notations and problem formulation: 1(a). In Section 2, it says that “the $i^{th}$ agent has data $(x_{i+}, y)$” and then later uses $y_i$ to denote the $i^{th}$ agent's labeled data. It is unclear whether these labels are the same or not. If they are the same, this is an unrealistic assumption in practice and obviously contrary to the common assumption in the literature that only one active party has access to the label data. 1(b). Equation (1) defines a weighted MSE, with expectation assessed with respect to the feature $x$, which is problematic. Why not consider the typical unweighted MSE with expectation evaluated with respect to $y$ in the regression setting? 2. It seems that the contribution of the paper is to apply existing single-imputation-based missing data methods to the vertical federated learning setting, with the exception that here they assume each agent has access to its own label data (which again is problematic). 3. The paper does compare its method with several other methods in the experiment. However, there are not enough discussions of these methods. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is the motivation to consider minimizing a weighted empirical loss in Equation (3)? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper considers single imputation, claiming that this is okay given that the goal is estimation error instead of confidence intervals. However, many theoretical results given in the paper are about the asymptotic distributions of the proposed estimators. Do these asymptotic distributions properly account for the uncertainty associated with the missing data imputation? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Based on your comments, we believe there is a misunderstanding**. We are not doing vertical federated learning [Liu et al.]. Unlike in vertical federated learning, agents in our framework are not measuring the same underlying set of users. Vertical federated learning is not the right way to model settings like the satellite and seismic sensor estimation problems we discussed in the introduction, as it would mean that each agent (i.e., each sensor) takes (different) measurements of the same locations at the same time. In the setting we study, each agent has its own input data and labels. Thank you for pointing this out; we will clarify this in the camera-ready version. **Response to 1a**: We believe this confusion stems from some notational issues. In line 102, $(x, y)$ refers to a single sample drawn from the generating distribution. In line 104, $y\_i\in \mathbb{R}^n$ refers to the vector of labels the agent observes. To be clear, as said in line 103, each agent gets $n$ draws of $(x, y)$ from the generating distribution and observes $X\_{i+}$ and $y\_i$: the labels are not the same across agents. Furthermore, just to make sure there is no confusion, the features observed by agent $i$ and agent $j$ ($j\neq i$) are *not* different subsets of the same $n$ feature vectors: each agent draws $n$ fresh covariates and observes a subset of the covariates. Finally, as we are not doing vertical federated learning, we do not have an “active party” constraint on the labeled data. We will make this very clear in the camera-ready version. **Response to 1b**: We are confident that what we wrote is the standard notion of prediction/generalization error for a fresh sample $(x \in \mathbb{R}^d, y \in \mathbb{R})$. 
In the regression setting, the weighted MSE with respect to the data distribution is the same, up to an additive constant, as the test error on a fresh sample: $\mathbb{E}\_{x, y}[(\langle x, \hat{\theta}\rangle - y)^2] = \mathbb{E}\_x[(\langle x, \hat{\theta} \rangle - \langle x, \theta \rangle)^2] + \sigma^2 = \\|\hat{\theta} - \theta \\|\_{\Sigma}^2 + \sigma^2$. This formulation is standard (e.g., see Section 2.1 in Hastie et al.). **Response to 2**: First, Collab is not doing single imputation at all. In fact, in Section 4, we compare our method Collab against traditional single imputation methods. Second, we are not doing vertical federated learning, as discussed above. **Response to 3**: Can you provide more details about what discussion you feel is missing? Is there anything you want us to clarify? We are open to suggestions. We want to emphasize that while we chose to compare against imputation methods in Section 4, our method is asymptotically instance-optimal (as shown by our lower bounds), meaning that no algorithm could theoretically perform statistically better. In this sense, our lower bound is the ultimate theoretical “baseline”. **Additional References** Hastie, Trevor J. et al. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation.” Annals of Statistics 50(2) (2022): 949–986. Liu, Yang et al. “Vertical Federated Learning.” arXiv abs/2211.12814 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I've gone through the authors' rebuttal and increased my score. --- Rebuttal 2: Title: Please read the rebuttal and other reviews Comment: Dear reviewer, The authors have posted a rebuttal. Please acknowledge that you have read it and indicate whether they have adequately addressed your concerns/comments. Your "strong reject" score indicates a significant technical flaw with the paper, and is in contradiction with the other scores on this paper. 
Please engage with the authors and clarify whether there is actually the technical flaw you're claiming. The author-reviewer discussion phase ends on Aug 21 so please discuss with the authors before that if you need any more clarifications. Thanks, AC
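The risk decomposition quoted in the rebuttal above, $\mathbb{E}_{x,y}[(\langle x, \hat{\theta}\rangle - y)^2] = \|\hat{\theta} - \theta\|_{\Sigma}^2 + \sigma^2$, can be sanity-checked numerically. The dimensions, covariance, and estimator below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 5, 0.5
Sigma = np.eye(d)                             # isotropic features for simplicity
theta = rng.normal(size=d)                    # true parameter
theta_hat = theta + 0.3 * rng.normal(size=d)  # some fixed estimator

# Monte Carlo estimate of the test error on fresh samples (x, y).
N = 200_000
X = rng.normal(size=(N, d))
y = X @ theta + sigma * rng.normal(size=N)
mc_risk = np.mean((X @ theta_hat - y) ** 2)

# Closed form: ||theta_hat - theta||_Sigma^2 + sigma^2.
delta = theta_hat - theta
closed_form = delta @ Sigma @ delta + sigma**2

assert abs(mc_risk - closed_form) < 0.05
```

The two quantities agree up to Monte Carlo error, matching the identity the authors cite.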
Summary: Summary ------- The paper studies collaborative linear regression, where m agents attempt to collaboratively estimate a linear model under communication constraints. Each agent i only observes di of the d features. A central server designs a protocol to elicit sufficient information from each agent and compute the parameter theta of the linear model so as to minimize communication costs and the estimation error. The authors present what appears to be a near-complete solution for the case where the covariates are distributed as a Gaussian and the covariance matrix is known. This includes asymptotic normality results for the proposed estimator and lower bounds which match as n goes to infinity. The authors also compare their method against other baselines based on imputing data and show that their estimation errors are no worse but come at significantly lower communication cost. Decision: While the paper is well-presented and pleasant to read, I am not an expert in this topic and did not have the time to go through the proofs in detail. As such, I am unable to evaluate the technical merit of the paper (challenges of the problem, novelty of proof techniques). I have given a positive score with low confidence to reflect this but will defer to more expert reviewers during the discussion. Detailed comments ----------------- The authors have used local and global imputation baselines in Table 1 to show the communication benefits, and have gone on to show that their method does no worse theoretically than these methods. However, I am not sure if these are particularly strong baselines to compete with; for instance, I would not have expected communicating all data points to be necessary. In fact, at the outset, my intuition suggested a solution in which each agent computes local coefficients which are then aggregated by the server in an appropriate fashion. It would have been helpful if the authors had better illustrated the challenges in doing so. 
For instance, is the method in 3.1 the most natural way to solve this problem, or are there other naive ways to aggregate the coefficients that yield sub-optimal solutions? The same applies to the non-Gaussian case and the setting where the covariance matrices are unknown. The authors could have done a better job of illustrating the challenges. In general, I did not get a sense of how challenging this problem setting is, which makes it hard to appreciate the contributions by the authors. Do the results in Theorems 3.1-3.2 implicitly capture the difficulty of the problem in terms of the number of covariates each agent has access to? - For instance, in the worst case there could be only n samples (when there is a perfect partition of the covariates among the agents, and they have the same points), but in the best case there could be mn samples (when all agents have all the covariates and have distinct points). If so, is it possible to make this more explicit in the results? The paper is largely well-written. The problem is well-motivated, the setting is described clearly, and the method/results are organized well. I would, however, have liked to see a sketch of the main proof ideas and how they differ from similar results in the linear regression literature. A breakdown of the key proof challenges that the authors had to overcome would also have been useful. What was the reason for the discussion around o(n) communication complexity? It appears that local imputation methods and the method of the authors are able to achieve consistent estimation with communication cost that does not depend on n. In the experiments, the imputation-based methods outperform the method of the authors when there are more samples. Intuitively, I would have expected this since they are more communication-heavy. However, this is not the case in the low-sample regime. Can you explain why this is the case? Strengths: See above. Weaknesses: See above. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Response to baseline strength**: Our method is asymptotically instance-optimal (as shown by our lower bounds), meaning that no algorithm could perform statistically better on any specific problem instance. In some sense, our lower bound is the ultimate theoretical “baseline”. The motivation for comparing against baseline algorithms which are “stronger” than our method in terms of communication cost is to ultimately show in the experiments section that Collab is not overfit to the assumptions of our theory and performs well against communication-ignorant, conventionally adopted methods, like imputation. **Response to the suggested intuitive solution**: If we understand correctly, our method falls within the scope of your intuition: Collab takes the locally computed coefficients and debiases them using covariance information. We note that we need covariance information from the agents; otherwise the debiasing procedure would return a biased estimate of the parameter. Having said this, we do want to point out that our aggregation approach is, to the best of our knowledge, novel and not standard in the missing data literature. As we discuss in the related work, imputation-type algorithms are more standard, which is also why we chose to baseline against them in our theory and experiments. **Response to non-Gaussian/unknown covariance case**: We first want to point out a potential confusion that could be caused by a typo in the definition of $\hat{W}\_i^g$ (line 155). The numerator is supposed to be the sample sub-covariance $\hat{\Sigma}\_{i+} = X\_{i+}^\top X\_{i+}/n$ instead of the exact sub-covariance matrix $\Sigma\_{i+}$. In fact, we use $\hat{\Sigma}\_{i+}$ in Algorithm 1 and in the proof of Corollary 3.2 in the submitted supplementary materials. We discuss the non-Gaussian setting in lines 147-150 and Section 7; to summarize, in the Gaussian setting, we can estimate $W_i^\star$ from the data we have access to. 
In the non-Gaussian setting we cannot. We are happy to answer any specific questions you may have about this. **Response to the implicit difficulty in Theorem 3.1 and Corollary 3.2**: Yes, you are right. Theorem 3.1 and Corollary 3.2 (and our lower bounds) capture the difficulty of the problem in terms of the number of covariates each agent has access to. In fact, consider each summand of the lower bound $C^g$ in the Gaussian setting. Note that $T\_i^\top W\_i^g T\_i = \frac{\Sigma - \Pi\_i^\top \begin{bmatrix} 0 & 0 \\\ 0 & \Gamma\_{i-} \end{bmatrix} \Pi\_i}{\\|\theta\_{i-}\\|\_{\Gamma\_{i-}}^2 + \sigma^2}$. If strictly more coordinates are observed by the $i$-th agent, then the Schur complement $\Pi\_i^\top \begin{bmatrix} 0 & 0 \\\ 0 & \Gamma\_{i-} \end{bmatrix} \Pi\_i$ will be a smaller matrix and $\\|\theta_{i-}\\|\_{\Gamma_{i-}}^2$ will be a smaller quantity, resulting in an overall larger $T\_i^\top W\_i^g T\_i$ and thus a smaller uncertainty $C^g$, and therefore also a smaller test error. We will add this discussion to the camera-ready version of the paper. **Response to o(n) bandwidth**: Thank you for bringing this up. You raise a good point about our storytelling with your observation that the local imputation baseline also has o(n) communication cost. While hopefully we were convincing in the introduction about why o(n) bandwidth constraints are important in real settings like satellites and seismic sensors, we agree that we did not adequately justify why a communication reduction from $d^2$ to $d\_i^2$ is important. We present this justification here now, and we will add it to the camera-ready version. One property of a good collaborative algorithm is that agents that are part of the collective should be incentivized to welcome new agents. In Collab, adding new agents to the collective never increases the communication cost to any of the existing agents in the collective. 
On the other hand, in the local imputation baseline algorithm, new agents with data collected from new/different sensors increase the communication cost of the other agents in the collective (i.e., because $d$ becomes larger). This implicitly incentivizes homogeneity of sensors within the collective, which is antithetical to the idea of leveraging diverse data to make better predictions. **Response to the low-sample imputation underperformance**: This is a good observation. We believe the phenomenon boils down to numerical instability, also known as double descent in linear regression, when the number of samples $n$ and the underlying dimension $d$ are comparable. We point out that our theory is asymptotic, so the statistical optimality holds for $n \\gg d$ and is therefore not predictive in the low-sample regime. Though not within the scope of our current theoretical setup, it is nonetheless an interesting future direction to investigate the optimal procedure when $n$ is comparable to $d$ (for instance, whether adding regularization would help). We will add this to the discussion section of our work. --- Rebuttal Comment 1.1: Title: re: Rebuttal Comment: Thank you for the detailed rebuttal. I will wait for the discussion period before forming my final score. That said, I think the paper could certainly improve in terms of presentation. The authors could do a better job of convincing the reader why this problem is challenging (for instance, a discussion of why the 'first intuitive/natural solution' would not work, and why a better algorithm is necessary). They should also convey the proof intuitions and techniques better in the main text. This will help reviewers (especially non-experts like me) better appreciate the contributions.
Rebuttal 1: Rebuttal: We want to clear up some possible confusion due to a typo we made. Our method Collab only needs the sample covariance $\hat{\Sigma}\_{i+} = X\_{i+}^\top X\_{i+}/n$ --- **not** the population covariance $\Sigma\_{i+}$ --- for our results to hold (see Algorithm 1 for the correct pseudocode). In other words, Collab does not need to know additional population information relative to the baselines. This potential confusion is likely caused by a typo in the definition of $\hat{W}\_i^g$ (line 155). The numerator is supposed to be the sample sub-covariance $\hat{\Sigma}\_{i+}$ instead of the exact sub-covariance matrix $\Sigma\_{i+}$. In fact, we use $\hat{\Sigma}\_{i+}$ in Algorithm 1 and in the proof of Corollary 3.2 in the submitted supplementary materials.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the problem of collaboratively learning least squares estimates with multiple agents, each of which only observes a different subset of the features. The authors propose a distributed, semi-supervised algorithm called Collab consisting of three steps: 1) local training, 2) aggregation, and 3) distribution. The authors show that the proposed Collab algorithm is nearly asymptotically local minimax optimal. The authors conduct experiments to verify their algorithm on real and synthetic data. Strengths: 1. This paper provides deep theoretical insights into the proposed Collab algorithm. The results in Theorems 4.1 and 4.2, that the performance of Collab is no worse than local imputation with collaboration and global imputation, are novel and surprising. 2. The asymptotic local minimax lower bound is interesting and could be of independent interest. Weaknesses: Although the results in this paper are interesting, as the authors themselves admit, they are limited to the linear model and Gaussian features. But the authors do provide some interesting discussions in Section 7 on future directions. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. In Step 2 of the main loop in the Collab algorithm, what happens if the estimate $\hat{\Sigma}_i$ is inaccurate? Could the authors analyze the impact of such errors? 2. Although the paper is well written in general, there are some minor typos. For example, in Line 162, it appears that the global estimator should be $\hat{\theta}^{clb}$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **Response to weaknesses**: Our theory is indeed limited to Gaussian features. We did experiments on non-Gaussian data in our Folktables experiment. Though it's only a preliminary experiment, we hope that it shows our method does not overfit to the Gaussian data setting. **Response to Q1: noisy estimate of covariance**: This is an interesting point. We first want to point out a potential confusion that could be caused by a typo in the definition of $\hat{W}_i^g$ (line 155). The numerator is supposed to be the sample sub-covariance $\hat{\Sigma}\_{i+} = X\_{i+}^\top X\_{i+}/n$ instead of the exact sub-covariance matrix $\Sigma\_{i+}$. In fact, we use $\hat{\Sigma}\_{i+}$ in Algorithm 1 and in the proof of Corollary 3.2 in the submitted supplementary materials. It is not clear what the optimal procedure is if we do not have a consistent estimate of the population covariance, which essentially boils down to the harder problem of distributional shift. This could be a future direction and we will include it in our discussion. **Response to Q2: typos**: Thanks for pointing this out. We will fix the typo you mentioned, the typo in the definition of $\hat{W}\_i^g$ (line 155), and other typos in the camera-ready version of the paper. --- Rebuttal 2: Title: Please acknowledge rebuttal Comment: Dear reviewer, The authors have posted a rebuttal. Please acknowledge that you have read it and indicate whether they have adequately addressed your concerns/comments. The author-reviewer discussion phase ends on Aug 21 so please engage with the authors before that if you need any more clarifications. Thanks, AC
Optimal Block-wise Asymmetric Graph Construction for Graph-based Semi-supervised Learning
Accept (poster)
Summary: The paper presents an optimal asymmetric graph structure for the label inference phase in graph-based semi-supervised learning (GSSL). The key motivation or intuition proposed by the authors is that we need to differentiate the roles of labeled and unlabeled nodes. Therefore, the authors design an efficient block-wise graph learning algorithm with a global convergence guarantee. The proposed method is shown to be superior to SOTA graph construction methods in GSSL through extensive experiments on synthetic and real-world datasets. The paper addresses the challenge of constructing a high-quality graph, which significantly influences label prediction performance, and proposes a solution with theoretical motivations and benefits, such as enhanced robustness to noisy node features. Strengths: 1. Quality: The work is of high quality, with rigorous theoretical motivations and comprehensive experiments. First, the motivation is strongly supported by theoretical analysis. The proposed asymmetric optimal graph structure is rigorously deduced from Definition 1. The authors provide a comprehensive explanation of the optimization problem and the structure of the optimal affinity graph. Second, they also present a detailed implementation of the block-wise graph learning algorithm BAGL. The paper is well-referenced, indicating a thorough understanding of the existing literature. Third, the provided convergence analysis of BAGL is also rigorous. The global sublinear convergence rate in Theorem 3 makes sense to the reviewer. Fourth, the experiments are comprehensive, including comparisons of prediction accuracy, efficiency, and convergence rate. 2. Clarity: The paper is well-structured and clear, with each section logically leading to the next. The authors provide clear definitions and explanations of complex concepts, making the paper accessible to readers with varying levels of expertise in the field. 
However, it is suggested that some parts of the appendix be moved to the main body to give more background context on graph-based semi-supervised learning. The use of mathematical notation and diagrams further enhances the clarity of the paper. 3. Significance: The reviewer thinks the work makes significant contributions to the GSSL field. First, the investigated problem is significant since most GSSL literature only focuses on the label inference step. This paper instead focuses on the neglected graph construction step, as the quality of the graph strongly affects the subsequent step. Second, the proposed method achieves the SOTA global convergence rate, contributing significantly to the GSSL field. Weaknesses: There are several potential improvements in this paper. 1. The background knowledge of the unified label inference framework could be elaborated. The details behind Eq.(1) should be provided to give readers without GSSL backgrounds more context. For example, Appendix B.2 should be added to the main body. Table 5 is very informative. 2. Recent graph structure learning (GSL) methods should be discussed or compared, since the goal of GSL is similar to the task investigated in this paper. 3. The conclusion part is short. More future work could be added. 4. Other comments are in the questions section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. While the paper provides a detailed explanation of the block-wise graph learning algorithm, it would be beneficial to have a more explicit discussion of the computational complexity of the algorithm. How does the algorithm scale with the size of the dataset? What are the implications of using this algorithm on large-scale datasets? 2. The paper could benefit from a discussion on potential real-world applications of the proposed method. In what specific domains or scenarios would this method be particularly useful? 3. 
It would be interesting to hear the authors' perspectives on the limitations of their proposed method and how they plan to address these in future work. Are there any specific challenges or difficulties that arose during the development of the method? 4. The paper mentions that the proposed method enhances the robustness of subsequent label inference algorithms. Could the authors elaborate on this? How does the method handle noise? Some theoretical analysis may be provided. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weaknesses and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very thoughtful review with constructive suggestions. We appreciate the recognition of our optimal graph construction approach in GSSL. We are glad to know that you find our paper novel, high-quality, rigorous with solid theoretical insights, well-organized with good writing, and contributing to the GSSL field. We would also like to thank you for your suggestions for improvement and have addressed each of your points below. We hope these responses will address your concerns appropriately. ## 1. **Background knowledge of the unified label inference framework** Thanks for your suggestion. **The relevant background knowledge of the unified label inference framework can be found in Appendix B.2**. We will consider moving some text and Table 5 into the main body for better presentation. ## 2. **Graph structure learning** More recent graph structure learning methods aim to learn a clean graph structure from the given noisy graph so that the subsequent GNNs trained on this learned clean graph can obtain better performance. In GSSL, however, there is no given graph structure, and we need to learn the graph structure based on the node features only. Therefore, it is a more challenging task compared to graph structure learning. **Therefore, we do not compare our method with other graph structure learning methods since their settings and goals are slightly different. We leave the investigation of graph structure learning for GSSL as future work since it is currently out of the scope of this work.** ## 3. **Short conclusion** We will elaborate on the conclusion section. More future work discussion in Appendix H will be added. ## 4. **Discussion on the computational complexity** Thanks for your suggestion. **Appendix G.5.1 gives a formal analysis of the time complexity, and Appendix G.5.2 presents a running time comparison with other baselines.** These results show our proposed method is efficient. ## 5. 
**A discussion on potential real-world applications** Thanks for your suggestion. **The potential impact of the proposed method with some real-world applications, like social media and web services, can be found in Appendix I.** ## 6. **Limitations of the proposed method** Thanks for your suggestion. **We discuss some limitations of the proposed method with future work in Appendix H**. For instance, even though BAGL is quite efficient in terms of convergence rate, it may still have computational issues when dealing with extremely large-scale datasets with billions of samples. We plan to reduce the time spent on finishing one iteration during the optimization to solve this issue in future work. ## 7. **Robustness analysis of the proposed method** Thanks for your suggestion. **We include a theoretical analysis of the proposed method in terms of the robustness interpretation in Appendix D.1.** Intuitively speaking, the proposed BAGL method can guarantee that the hidden ground-truth distribution of the sample feature will be contained in an introduced ambiguity set. Even if there exists some noise in the observed features, we can still recover the ground-truth distribution as long as the number of node feature channels is sufficiently large. --- Rebuttal Comment 1.1: Comment: Thank you for your response. However, after reading the other reviews and some related works, I found that the novelty of this work is actually limited compared with some existing works. Sorry, I will lower my score to borderline reject. --- Reply to Comment 1.1.1: Title: Response to Reviewer 1FFQ for limited novelty concerns Comment: We appreciate your recognition of the novelty in the first round of reviews. We also fully understand your concerns regarding the novelty after reading other reviewers' comments. We agree that the optimization algorithm for learning the graph weights is built on top of the phenomenal work [1]. 
However, according to the NeurIPS 2023 reviewer guidelines [2], work that uses a novel combination of well-known techniques can be valuable! (Review Form -> Strengths and Weaknesses -> Originality) Therefore, we believe that using an existing well-known method like [1] in our method can still have its originality, based on the following new insights and contributions to the GSSL domain. First, [1] builds a symmetric graph without differentiating the labeled and unlabeled nodes, while ours builds an asymmetric graph that accounts for the different roles labeled and unlabeled nodes play, following the proposed optimal graph structure. This key difference between [1] and our method directly leads to the superior empirical performance of ours compared to [1]. Table 2 supports this claim: our method BAGL outperforms the method SGL used in [1] by large margins. Second, even though the formulated optimization problems in [1] and our method (when focusing on one block) are similar (Eq.(5)(6) vs. Eq.(17) in [1]), the optimization algorithms used to solve the respective problems are totally different. [1] uses the off-the-shelf primal-dual algorithm, while ours applies the FISTA algorithm to the dual formulation of the original problem. This key difference leads to the following different properties of the two methods. Third, [1] does not have a convergence rate guarantee, while ours enjoys a global sub-linear convergence rate guarantee, which is the SOTA result in the GSSL domain. This key difference comes from the optimization algorithm we use, which is also novel, utilizing the power of the FISTA algorithm. Fourth, we also give interpretations of our method from the aspects of robustness (Appendix D.1) and generalization bound (Appendix D.2), while [1] does not have these theoretical interpretations. All these interpretations of our method are also novel and show its theoretical advantages. 
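To make the accelerated scheme referenced in this exchange concrete, here is a minimal, generic FISTA loop with the O(1/k^2) objective-error rate the rebuttal alludes to. It is an illustrative sketch only, run on a toy nonnegativity-constrained least-squares problem; the matrices, step size, and constraint here are made up for the example and are not the authors' dual graph-learning formulation.

```python
import math

def fista(grad, prox, x0, step, iters=200):
    """Accelerated proximal gradient (FISTA); objective error decays O(1/k^2)."""
    x_prev = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x = prox([y[i] - step * g[i] for i in range(len(y))])  # proximal gradient step
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # momentum (extrapolation) step -- this is what accelerates plain ISTA
        y = [x[i] + ((t - 1.0) / t_next) * (x[i] - x_prev[i]) for i in range(len(x))]
        x_prev, t = x, t_next
    return x_prev

# Toy instance: minimize 0.5 * ||A x - b||^2  subject to  x >= 0
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, -1.0]

def grad(x):  # gradient A^T (A x - b)
    r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    return [sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]

prox = lambda v: [max(0.0, vi) for vi in v]  # projection onto the nonnegative orthant

x = fista(grad, prox, [0.0, 0.0], step=0.2)  # step <= 1/L, with L = lambda_max(A^T A) = 4
# converges to the constrained minimizer x = (1, 0)
```

Swapping in a different `prox` (e.g., a simplex projection for a dual variable) changes the constrained problem being solved without touching the loop, which is the flexibility that makes applying FISTA to a dual formulation straightforward.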
To sum up, we believe that our method has novel contributions to the GSSL field even though it is partially developed based on [1]. [1] Kalofolias, Vassilis. "How to learn a graph from smooth signals." Artificial intelligence and statistics. PMLR, 2016. [2] https://neurips.cc/Conferences/2023/ReviewerGuidelines **We sincerely hope that the reviewer can re-evaluate the novelty of our work based on our response.**
Summary: The paper proposes a method for graph construction stage of graph based semi-supervised learning. They further evaluate their method with experimental results. Strengths: The authors present strong theoretical results. Weaknesses: The experimental results for the proposed method in Table 2 are only marginally better than the baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work! We highly appreciate your feedback! **We did a statistical significance test in the experiment section**. Specifically, we perform the Friedman test with the Bonferroni-Dunn post hoc test for statistical significance analysis. Figure 2 illustrates the critical difference (CD) diagram on accuracy, where the average rank is marked along the axis with lower (better) ranks to the left. If the average rank difference between two models is greater than one CD, the relative performance is believed to be different. **Accordingly, our proposed method BAGL significantly outperforms all other baselines by a large margin.** Please refer to Figure 2 for more details.
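For readers unfamiliar with this statistical procedure, here is a small self-contained sketch of the Friedman statistic and the critical difference (CD) used in such CD diagrams. It is illustrative only: the accuracy scores below are made up, and the Bonferroni-Dunn critical value q_0.05 = 2.241 for k = 3 methods is an assumed constant from a standard post hoc table (Demšar-style analysis), not a number taken from the paper.

```python
import math

def avg_ranks(scores):
    """scores[d][m] = accuracy of method m on dataset d; higher is better.
    Returns the average rank of each method (rank 1 = best, ties averaged)."""
    n_d, n_m = len(scores), len(scores[0])
    ranks = [0.0] * n_m
    for row in scores:
        order = sorted(range(n_m), key=lambda m: -row[m])
        r = [0.0] * n_m
        i = 0
        while i < n_m:
            j = i
            while j + 1 < n_m and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the run of tied scores
            tie_rank = (i + j) / 2.0 + 1.0
            for p in range(i, j + 1):
                r[order[p]] = tie_rank
            i = j + 1
        for m in range(n_m):
            ranks[m] += r[m] / n_d
    return ranks

def friedman_stat(ranks, n_d):
    """Friedman chi-square statistic from average ranks over n_d datasets."""
    k = len(ranks)
    return 12.0 * n_d / (k * (k + 1)) * (sum(R * R for R in ranks) - k * (k + 1) ** 2 / 4.0)

def critical_difference(k, n_d, q_alpha):
    # CD = q_alpha * sqrt(k(k+1) / (6 n_d)); ranks differing by more than CD
    # are declared significantly different in the CD diagram.
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_d))

scores = [[97.9, 96.7, 95.1],
          [98.0, 96.9, 95.4],
          [97.5, 96.0, 94.8],
          [98.2, 97.1, 95.9],
          [97.7, 96.4, 95.0]]  # 5 datasets x 3 methods (hypothetical accuracies)
R = avg_ranks(scores)          # method 0 is best on every dataset here
chi2 = friedman_stat(R, len(scores))
cd = critical_difference(k=3, n_d=len(scores), q_alpha=2.241)
```

With these hypothetical scores the average ranks are [1, 2, 3], so the rank gap between the first and second methods (1.0) falls below the CD (about 1.42); more datasets would be needed before a CD diagram separated them.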
Summary: This paper proposes an efficient and effective method for constructing affinity graphs in Graph-based Semi-supervised Learning (GSSL), with a focus on the distinct roles of labeled and unlabeled nodes. The authors present a formulation for the GSSL problem, comprising two steps: graph construction and label inference. They investigate the optimal construction of the affinity graph in the first phase to facilitate enhanced performance in the second label inference phase. The paper offers four main contributions: a succinct definition of the optimality of the affinity graph in GSSL, a block-wise graph learning framework to infer the weights in the optimal graph structure, proof of a global sub-linear convergence rate for the proposed method, and extensive experiments on synthetic and real-world datasets to demonstrate the effectiveness and efficiency of the proposed method. Strengths: Originality: The paper presents a novel approach to constructing affinity graphs in GSSL, with a focus on the distinct roles of labeled and unlabeled nodes. The proposed method is based on an asymmetric structure and a block-wise graph learning framework, which are different from existing methods. The paper also offers a succinct definition of the optimality of the affinity graph in GSSL, which is a unique contribution to the field. Overall, the paper is highly original in its approach to graph construction in GSSL. Quality: The paper is well-written and presents a rigorous derivation of the proposed method. The authors provide a clear explanation of the problem formulation and the proposed solution, as well as a detailed analysis of the benefits of the proposed method. The experiments are extensive and well-designed, with results that demonstrate the effectiveness and efficiency of the proposed method. The paper also includes a thorough review of related work, which adds to the quality of the paper. Overall, the paper is of high quality. 
Clarity: The paper is well-organized and easy to follow. The authors provide clear explanations of the concepts and methods used in the paper, and the figures and tables are well-designed and easy to understand. The paper also includes a summary of the contributions and a conclusion that summarizes the main findings. Overall, the paper is highly clear and well-presented. Significance: The paper makes a significant contribution to the field of GSSL by proposing an efficient and effective method for constructing affinity graphs. The proposed method is based on an asymmetric structure and a block-wise graph learning framework, which are different from existing methods. The paper also offers a succinct definition of the optimality of the affinity graph in GSSL, which is a unique contribution to the field. The experiments demonstrate the effectiveness and efficiency of the proposed method, which has the potential to improve the performance of GSSL algorithms in a wide range of applications. Overall, the paper is highly significant in its contribution to the field of GSSL. Weaknesses: 1. Lack of comparison with more recent state-of-the-art methods: While the paper compares the proposed method with several existing methods, some of these methods are relatively old and may not represent the current state-of-the-art in GSSL. It would be useful to compare the proposed method with more recent methods to provide a more comprehensive evaluation. 2. Limited discussion of the limitations of the proposed method: While the paper discusses the benefits of the proposed method, there is limited discussion of its limitations. It would be useful to discuss the situations in which the proposed method may not be effective and to provide guidance on when to use the proposed method versus other methods. 3. Lack of real-world applications: While the paper includes experiments on synthetic and real-world datasets, there is limited discussion of real-world applications of the proposed method. 
It would be useful to provide examples of how the proposed method could be applied in real-world scenarios and to discuss the potential impact of the proposed method on these applications. Overall, while the paper presents a novel approach to constructing affinity graphs in GSSL, there are some weaknesses that could be addressed to improve the paper's impact and relevance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: How does the proposed method compare to more recent state-of-the-art methods in GSSL? Can you provide more guidance on when to use the proposed method versus other methods? Can you provide examples of how the proposed method could be applied in real-world scenarios? Suggestions: 1. Consider comparing the proposed method with more recent state-of-the-art methods in GSSL to provide a more comprehensive evaluation. 2. Provide more discussion of the limitations of the proposed method and when it may not be effective. 3. Provide examples of how the proposed method could be applied in real-world scenarios to demonstrate its potential impact and relevance. 4. Consider including a discussion of the computational complexity of the proposed method and how it compares to other methods. 5. Consider including a more detailed explanation of the block-wise graph learning framework used in the proposed method to help readers better understand the approach. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not explicitly discuss the limitations and potential negative societal impact of the proposed method. 
While the paper does discuss the benefits of the proposed method, it does not provide a comprehensive discussion of its limitations or potential negative consequences. It is important for authors to consider the potential limitations and negative consequences of their work, as this can help to ensure that the benefits of the work outweigh any potential negative impacts. In particular, authors should consider the ethical implications of their work and how it may impact society as a whole. Therefore, it can be said that the authors have not adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very thoughtful and constructive review. We appreciate the recognition of our optimal graph construction approach in GSSL. We are glad to know that you find our paper novel, well-written, rigorous, well-organized, and making a significant contribution to the field of GSSL. We would also like to thank you for your suggestions for improvement and have addressed each of your points below. We hope these responses will address your concerns appropriately. ## 1. **SOTA methods comparison** Thanks for your suggestion. **We chose some old graph construction methods in GSSL, like kNN and RBF, mainly because they are still the simplest yet quite effective methods, and they are included in relevant baseline papers as well.** The investigation of the graph construction step in GSSL has been overlooked for quite a long time, which is why only a few recent SOTA methods have come out in the past few years. However, **we do include GraphEBM [1], published in 2020, and BCAN [2], published in 2022, as the most recent SOTA methods as baselines.** We believe most of the recent important SOTA methods for graph construction in GSSL are covered in our experiments. We also welcome suggestions, with references, of other recent SOTA graph construction methods for GSSL. [1] Zhijie Chen, Hongtai Cao, and Kevin Chen-Chuan Chang. GraphEBM: Energy-based graph construction for semi-supervised learning. In ICDM, pages 62–71. IEEE, 2020. [2] Zhen Wang, Long Zhang, Rong Wang, Feiping Nie, and Xuelong Li. Semi-supervised learning via bipartite graph construction with adaptive neighbors. IEEE Transactions on Knowledge and Data Engineering, pages 1–1, 2022. ## 2. **Discussion of the limitations** Thanks for your suggestion. 
**We actually include the discussion of the limitations in Appendix H.** For instance, one limitation of BAGL is that it is only suitable for the transductive setting, and it may still have computational issues when dealing with extremely large-scale datasets with billions of samples. We leave the investigation of these issues as future work since it is out of the scope of this paper. We will add more discussion of limitations in the main body. ## 3. **Lack of real-world applications** Thanks for your suggestion. In fact, **we indeed run the experiments on some open real-world datasets, and the task is image classification. We consider this an example of a real-world application.** Please refer to Appendix F.1 for the details of the real-world datasets. The experimental results show how the proposed method could be applied in this real-world scenario. **The potential impact of the proposed method can be found in Appendix I, with a focus on social benefits**. ## 4. **A discussion of the computational complexity** Thanks for your suggestion. **Appendix G.5.1 gives a formal analysis of the time complexity, and Appendix G.5.2 presents a running time comparison with other baselines.** These results show our proposed method is efficient. ## 5. **A more detailed explanation of the proposed method** Thanks for your suggestion. For the block-wise graph learning framework, we build on top of the well-known method [1]. We learn each block in the optimal graph structure via a method similar to that in [1], but with quite different optimization algorithms. We apply the FISTA algorithm to the dual formulation of the proposed learning framework. **We will polish the text to make the paper easier to understand.** [1] Kalofolias, Vassilis. "How to learn a graph from smooth signals." Artificial intelligence and statistics. PMLR, 2016.
Summary: This paper proposes a novel methodology for graph-based semi-supervised learning by leveraging an asymmetric graph construction technique. The main contribution of the paper is the design of a block-wise graph learning framework to estimate the weights of a graph. Strengths: The main strengths of the paper are: - The derivation of the structure of the optimal affinity graph - The derivation of an optimization algorithm for the implementation of the block-wise graph learning algorithm - The thorough experimental evaluation to assess the performance of the proposed algorithm in different scenarios Weaknesses: - While the derivation of the optimal affinity graph is novel, the optimization algorithm for learning the graph weights is heavily influenced by prior works and is therefore not very novel. - The plots (c) and (d) in Figure 3 are not as helpful because the x-axis represents the number of iterations, whereas the authors should instead use computational time. Therefore, from Figures 3c and 3d we cannot conclude much about the computational efficiency of the proposed algorithm. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - I am not convinced regarding the novelty of the framework. Can the authors elaborate on how the development of section 3.2 is different from the work of [23], apart from the well-known application of the FISTA step? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors properly addressed the limitations of their proposed framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed review. We appreciate the recognition of our derivation of the optimal affinity graph with thorough experimental evaluation. We would like to thank you for your suggestions for improvement and have addressed each of your points below. We hope these responses will address your concerns appropriately. ## 1. **Novelty** We appreciate your recognition of the novelty in the derivation of the optimal affinity graph in our method, confirmed by other reviewers as well. We also fully understand your concerns regarding the novelty of the optimization part or implementation of the proposed BAGL algorithm. **We agree that the optimization algorithm for learning the graph weights is built on top of the phenomenal work [1]. However, according to the NeurIPS 2023 reviewer guidelines [2], work that uses a novel combination of well-known techniques can be valuable! (Review Form -> Strengths and Weaknesses -> Originality) Therefore, we believe that using an existing well-known method like [1] in our method can still have its originality, based on the following new insights and contributions to the GSSL domain.** First, **[1] builds a symmetric graph without differentiating the labeled and unlabeled nodes, while ours builds an asymmetric graph that accounts for the different roles labeled and unlabeled nodes play, following the proposed optimal graph structure.** This key difference between [1] and our method directly leads to the superior empirical performance of ours compared to [1]. Table 2 supports this claim: **our method BAGL outperforms the method SGL used in [1] by large margins.** Second, even though the formulated optimization problems in [1] and our method (when focusing on one block) are similar (Eq.(5)(6) vs. 
Eq.(17) in [1]), **the optimization algorithms used to solve the respective problems are totally different.** [1] uses the off-the-shelf primal-dual algorithm, while ours applies the FISTA algorithm to the dual formulation of the original problem. This key difference leads to the following different properties of the two methods. Third, **[1] does not have a convergence rate guarantee, while ours enjoys a global sub-linear convergence rate guarantee, which is the SOTA result in the GSSL domain.** The key difference comes from the optimization algorithm we use, which is also novel, utilizing the power of the FISTA algorithm. Fourth, **we also give interpretations of our method from the aspects of robustness (Appendix D.1) and generalization bound (Appendix D.2), while [1] does not have these theoretical interpretations.** All these interpretations of our method are also novel and show its theoretical advantages. To sum up, we believe that our method has novel contributions to the GSSL field even though it is partially developed based on [1]. [1] Kalofolias, Vassilis. "How to learn a graph from smooth signals." Artificial intelligence and statistics. PMLR, 2016. [2] https://neurips.cc/Conferences/2023/ReviewerGuidelines Further, **other reviewers also appreciate the novelty of our work**. Reviewer HpMF says, "The paper presents a novel approach to constructing affinity graphs in GSSL, with a focus on the distinct roles of labeled and unlabeled nodes." Reviewer 1FFQ comments, "Even though this method is based on the well-known FISTA optimization algorithm, it is applied to the dual problem, which is also new since the convergence rate will be affected compared to the one applied to the primal problem directly." **We sincerely hope that the reviewer can re-evaluate the novelty of our work based on our response.** ## 2. **Regarding Figure 3** We understand your concerns related to Figure 3. 
In fact, **Figure 3 is not for comparing the computational efficiency of different methods but for comparing their convergence rates. We do not cover computational efficiency in Sec. 4.3.2; we only focus on the convergence rate there.** When comparing the convergence rates of different algorithms in the optimization domain, we usually set the x-axis to the number of iterations rather than the actual running time. In this way, we can easily spot how quickly each algorithm approaches its limit. One of the most direct ways to measure this is by observing how the solution improves (e.g., how the $l_2$ distance between the current solution and the limit solution decreases) with each successive iteration. Please refer to the axes of Figure 3(c)(d) for details. Thus, **setting the x-axis to the number of iterations better aligns with the definition of convergence rate**: fewer iterations to achieve a similar error is generally better. However, the computational cost of one iteration can differ dramatically between algorithms, so iteration count is not always a direct measure of computational effort. Your suggestion of a computational time comparison is therefore insightful. In fact, **we have already included a computational efficiency comparison in Appendix G.5.2**, titled "Running time comparison," where we compare the actual running time of each method. **Table 7 shows that our method is quite efficient compared with other optimization-based graph construction methods**. ## 3. **Final remarks** We sincerely respect and appreciate the time and effort you have dedicated to reviewing our work. We understand that every work should undergo meticulous scrutiny to ensure the highest standards, and we truly value your feedback. 
However, **we humbly hope you can reconsider certain aspects of our work, which we believe possesses significant novelty and includes a computational time comparison in the Appendix.** If there are specific areas of ambiguity or contention, we are willing to address them with further clarification. We kindly ask for an opportunity to emphasize the potential impact our work can bring to the broader community. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. First of all, the question I asked was straightforward and I was expecting a direct-to-the-point answer rather than a lengthy essay with bold-face quotations all over. Based on that and on other reviewers' views about how the current paper falls short of clearly explaining how it is different from [23], I'll maintain my score as is. --- Reply to Comment 1.1.1: Comment: Thanks for your suggestion! We format our response into a direct-to-the-point answer without bold text. ### 1. Novelty We list the major differences between our work and [23], showing the novelty and contribution of our work. We will add a detailed discussion of their differences.

|                                          | Ours                    | [23]                  |
|:----------------------------------------:|:-----------------------:|:---------------------:|
| Graph structure                          | Asymmetric graph        | Symmetric graph       |
| Label information                        | Used                    | Not used              |
| Optimal for label inference step in GSSL | Yes                     | No                    |
| Optimization algorithm                   | FISTA on dual problem   | Primal-dual algorithm |
| Convergence rate                         | Globally sub-linear     | No guarantee          |
| Time complexity per iteration            | $O(N_u \times N_l + N)$ | $O(N \times N)$       |
| Generalization bound improvement         | Yes                     | No                    |

Besides, the NeurIPS 2023 reviewer guidelines indicate that work using a novel combination of well-known techniques can be valuable. ### 2. Figure 3 Figure 3 is for convergence rate comparison, so we set the x-axis as the number of iterations. The running time comparison can be found in Appendix G.5.2. ### 3. 
Final remarks We sincerely hope the reviewer can re-evaluate the novelty of our work based on our compact response and the latest reviews from other reviewers! --- Rebuttal 2: Title: Please provide additional feedback Comment: Hi, Could you please acknowledge that you have read the rebuttal and let the reviewers know if you still have any concerns or not?
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose to solve graph-based semi-supervised learning (GSSL) problems by first finding the "optimal graph" for SSL. The optimal graph has edges only from labeled to unlabeled nodes, or between unlabeled nodes. These edge weights are computed through the FISTA algorithm in the dual space, and theoretical guarantees are provided for sub-linear convergence rates. Experiments are conducted using both synthetic and real-world datasets. Strengths: This paper is very well written and nicely presented. The clear writing made it easy to follow. I did not carefully read through the proofs in the appendix, but the motivation for the framework and the derivation of the algorithm seem correct. I appreciate that this paper did solid work on all aspects - problem formulation, clever optimization algorithm, theoretical convergence analysis, and numerical experiments. Weaknesses: Please see the questions below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. When comparing methods such as RBF and KNN to BAGL, were RBF and KNN used to also generate asymmetric (and not symmetric) graphs? This is not made immediately clear in the text, and I'm wondering if the superior performance mainly comes from keeping $W_{lu}, W_{ll}$ all zero matrices. 2. Figure 1c is visually appealing, but I'm having trouble understanding why that solution is any better or more probable than the RBF solution in 1b. 3. There's a mismatch between appendix section labels and their references in the main text. This should be fixed since the appendix includes important time complexity and runtime analysis. 4. How does BAGL react to class imbalance, either in the labeled nodes or in the entire classification problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very thoughtful and constructive review. We appreciate the recognition of our optimal graph construction approach in GSSL. We are glad to know that you find our paper solid in all aspects - problem formulation, clever optimization algorithm, theoretical convergence analysis, and numerical experiments. We would also like to thank you for your suggestions for improvement and have addressed each of your points below. We hope these responses will address your concerns appropriately. ## 1. Keeping $W_{lu}$ and $W_{ll}$ zero matrices. This suggestion is quite interesting and valuable! In fact, when comparing existing graph construction methods like kNN or RBF with our proposed method BAGL (Table 2), we stick to the original symmetric graph structure of these baselines and do not convert their constructed graphs into the optimal asymmetric graph structure used in our proposed method BAGL. **We choose not to do so, mainly because we strictly follow the original graph structures used in these baselines, which are all symmetric graphs, for a fair comparison with our method.** Regarding the question of whether the superior performance mainly comes from keeping $W_{lu}, W_{ll}$ all zero matrices, **we actually did an ablation study to investigate the influence of keeping $W_{lu}, W_{ll}$ all zero matrices for the proposed BAGL method in Table 3 in Sec. 4.3.3**. If we remove the constraints $W_{lu} = \mathbf{O}, W_{ll} = \mathbf{O}$, we find the most significant performance drop among the variants of BAGL in Table 3. **This result shows that the proposed optimal asymmetric graph structure contributes most to the success of BAGL.** However, we agree that this proposed asymmetric optimal graph structure can be easily incorporated into existing graph construction methods like kNN and RBF. 
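The incorporation discussed here can be sketched in a few lines. This is a hypothetical helper, not the authors' code: it assumes `W[i][j]` stores the weight of the directed edge i -> j, and it zeroes every edge pointing into a labeled node, which under this convention corresponds to setting the $W_{lu}$ and $W_{ll}$ blocks of the paper's notation to zero matrices (the paper's exact block-naming convention may differ).

```python
def impose_asymmetric_structure(W, labeled):
    """Zero every edge that points into a labeled node, keeping only
    labeled->unlabeled and unlabeled->unlabeled edges."""
    lab = set(labeled)
    return [[0.0 if j in lab else W[i][j] for j in range(len(W[i]))]
            for i in range(len(W))]

# Symmetric 3-node affinity matrix with node 0 labeled:
W = [[0.0, 1.0, 2.0],
     [1.0, 0.0, 3.0],
     [2.0, 3.0, 0.0]]
W_asym = impose_asymmetric_structure(W, labeled=[0])
# column 0 is zeroed: edges 1->0 and 2->0 are removed, 0->1 and 0->2 survive
```

Applied to the output of any baseline constructor (kNN, RBF, etc.), this post-processing reproduces the kind of conversion experiment described next.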
Therefore, we add another experiment where we also convert the graphs constructed by existing baselines into the same proposed asymmetric optimal graph structure by setting $W_{lu} = \mathbf{O}, W_{ll} = \mathbf{O}$. The results are as follows.

| | RBF | kNN | SGL | RGCLI | AGR | GraphEBM | BAGL |
|:--------------------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:--------:|:-----:|
| w/o $W_{lu},W_{ll} = \mathbf{O}$ | 97.46 | 86.59 | 94.68 | 88.24 | 97.63 | 95.13 | 95.71 |
| w/ $W_{lu},W_{ll} = \mathbf{O}$ | 97.51 | 87.66 | 96.72 | 89.31 | 95.40 | 95.21 | 97.88 |
| improvement (%) | 0.05 | 1.23 | 2.15 | 1.21 | -2.28 | 0.08 | 2.26 |

Several observations can be made from this table. The proposed optimal asymmetric graph structure obtained by setting $W_{lu},W_{ll} = \mathbf{O}$ does have positive effects on almost all graph construction methods in GSSL. These empirical findings also support the theoretical motivations in Sec. 3.1.2. Also, optimization-based methods like SGL and BAGL seem to benefit more from this optimal asymmetric graph structure than other baselines like kNN. This may be because these methods handle the optimization problem in Eq. (3) directly or indirectly, in line with the derivation of why we set $W_{lu},W_{ll} = \mathbf{O}$ in Proposition 1. That said, this optimal asymmetric structure is not a universally positive strategy for all graph construction baselines: it has negative effects on AGR. We suspect that setting $W_{lu},W_{ll} = \mathbf{O}$ may break the anchor node connections in AGR, thus leading to sub-optimal performance. **In summary, even though the proposed optimal structure is quite simple and may be incorporated into many existing graph construction baselines, we only obtain the most significant performance improvement when it is used in optimization-based methods like our proposed method BAGL.** ## 2. Figure 1.
A simple but not entirely accurate intuition for the superiority of BAGL over RBF (Figure 1) is as follows. BAGL implicitly uses the label information via the optimal graph structure (setting $W_{lu},W_{ll} = \mathbf{O}$). Therefore, it is easier for BAGL to learn that the given red and green labeled points (Figure 1a) are from two different classes or clusters (one is a ring-like cluster, and the other is a dense one). But RBF is based only on the distance between points, without any label information; thus, its performance is worse. ## 3. Mismatch between main and appendix. Thanks for spotting this mismatch. We will fix this issue. ## 4. Class imbalance We handcraft a label-imbalanced version of the ORHD dataset, where the labeled nodes are sampled until the overall imbalance ratio (max(#labeled class nodes) / min(#labeled class nodes)) reaches 20. The results of BAGL with different subsequent label inference algorithms are as follows.

| | GRF | LGC | GCN |
|:------------------------:|:-----:|:-----:|:-----:|
| Balanced dataset (Acc) | 97.88 | 98.04 | 98.15 |
| Imbalanced dataset (Acc) | 94.75 | 95.30 | 96.28 |
| Performance Drop (%) | 3.19 | 2.79 | 1.90 |

**We can see that imbalance in the label classes leads to a great performance drop in BAGL, because BAGL only uses the information of whether a node is labeled or not in the graph construction step, rather than the exact label of the node. This makes BAGL unaware of the labeled class imbalance issue during training.** We will add this limitation of BAGL. However, from the results, we can see that the subsequent label inference algorithms also react differently to labeled class imbalance (e.g., GCN is more robust). Therefore, we leave the investigation of graph construction methods that are also robust to label class imbalance as future work, since it is out of the scope of this work. Thanks for your insights on this future direction.
We will continue to conduct research on this new problem setting. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experimental results. Overall, I am happy with the paper and the authors' rebuttal comments. However, it seems that the details in the appendix are *absolutely necessary* to address many of the reviewers' and future readers' concerns; in fact, I had some of those myself before I looked through the appendix. I'll keep my original score, but it is possible that a journal that allows for more thorough and longer manuscripts would be a better fit for this paper. Based on the reviewers' comments, at least a more detailed discussion of why this proposed work differs from [23], along with a discussion of the proposed method's limitations, should be added to the main text. --- Reply to Comment 1.1.1: Comment: Thanks again for your strong support and insightful suggestions for our work! We will move some content from the appendix to the main body so that readers will have a better understanding of our work. More importantly, we will add a detailed discussion of the difference between [23] and our work, along with the limitations of our work. We will definitely consider extending this work to a journal, per your valuable suggestion!
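For concreteness, the overall imbalance ratio used to build the imbalanced ORHD split above (max(#labeled class nodes) / min(#labeled class nodes)) can be sketched as follows; this is a minimal illustration, and the helper name is ours rather than the authors':

```python
from collections import Counter

def imbalance_ratio(labels):
    # Overall imbalance ratio of a labeled set:
    # max(#labeled nodes per class) / min(#labeled nodes per class).
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# 40 labeled nodes of class 0 vs. 2 of class 1 reaches the target ratio of 20.
labels = [0] * 40 + [1] * 2
print(imbalance_ratio(labels))  # 20.0
```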
Language-driven Scene Synthesis using Multi-conditional Diffusion Model
Accept (poster)
Summary: This paper approaches the task of predicting the location and orientation of furniture, conditioned upon a person’s motion sequence, existing furniture, and text. By conditioning on text, which prior work does not do, the proposed method enables users to actively specify furniture location. In addition, experiments show it enables object replacement, shape alteration, and displacement. This work introduces captions to the prior human motion-furniture dataset PROXD, and shows SOTA performance given captions. Strengths: This paper adds a useful contribution to the task of human-guided scene layout - Prior work generates furniture from motion. This work enables users to specify location using text, making it much more applicable to real-world scenarios - Gathering captions for the standard dataset PROXD enables the proposed method to significantly outperform prior work across metrics. The paper contains good analysis of text prompts in the Supplemental. - The method also can leverage text from HUMANISE, again enabling significant improvement - The approach also enables object editing, which is experimentally evaluated The method applies the intuitive approach of “guiding points” to this conditional 3D diffusion task, which it shows is highly effective in experiments. - The idea of conditioning on a weighted combination of predicted locations from each conditioning component is intuitively more powerful than conditioning on latent encodings - The paper provides theoretical guarantees that guiding points explicitly contribute to denoising - Experiments show conditioning upon guiding points is more effective than translation vectors alone, or unconditional - Experiments also show correlation between accuracy of guiding points and final performance, empirically confirming theoretical findings. Weaknesses: Edit: after rebuttal, my concerns are well-addressed. There are several missing details and comparisons that make full assessment of the paper challenging.
This includes missing limitations. - I do not understand how the training works for scene synthesis, which makes it hard to fully assess the importance of guiding points. My understanding is the LSDM denoises a point cloud given the output of the guiding points network. I assume then, the guiding points network is trained jointly, end to end with the LSDM, at each denoising step? And that no networks are pretrained, including the text encoder (I’d assume this is pretrained)? This would mean S is not actually trained to predict the final position, but rather consists more of geometric features? Based on Figure 6, it is hard to determine if S is directly supervised. - It feels like another reasonable design choice would be to initialize diffusion with guiding points, and denoise these, as opposed to (or in addition to) conditioning upon them. Testing this choice could perhaps more directly validate the theoretical finding that using S specifically for conditioning is helpful. - In qualitative results, a single human location is used. However, the proposed task is to consider a vector of human locations (“motion”). How does text work given the input is not a single location, but a set of locations? Text is sensitive to location, e.g., “Place a desk in front of me”. Is time assumed to be the last timestep? In this case, does the dataset generate text descriptions based only on the last location of the human? In reality, I would imagine users would like to specify text based on any number of timesteps throughout the trajectory, e.g., “place the desk in front of me [at frame i<N of N]” - The comparison to multi-conditional modeling is not fully satisfying. The method compares to itself without F, but otherwise keeps the same geometric-based architecture. Namely, it combines weights linearly using w. A more standard conditional diffusion approach would be to concatenate features or combine them through nonlinear (e.g. transformer) layers.
This comparison would make the argument for the proposed method stronger. - Is there a breakdown into in-contact vs. not-in-contact objects? Prior work specifically uses this; as this paper claims to outperform in not-in-contact objects, it would be a helpful metric to report. Contributions feel slightly niche (minor weakness) - The central contribution of conditioning upon a geometrically transformed linear combination of feature distances of objects and humans is a cool contribution. However, it feels specific to the task of furniture placement conditional on human motion and text. Is there a wider reason this method is important? - Saying scene synthesis has gained significant attention in the past few years while citing one paper from last year with 3 citations (L16) is not convincing that the task is very important - Text-conditioned diffusion models predicting position and orientation already exist in the near subfield of human motion generation (Tevet et al. Human Motion Diffusion Model, ICLR 2023). The application in the near subfield of furniture position and orientation on its own feels like a relatively modest step. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - L4: “combing” -> combining - L51: “a new challenging” -> “a new challenge” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. I would recommend particular focus on the ability to utilize a sequence of human motion (see weaknesses), assuming this is a limitation. Others could include the assumption that one knows the object of interest and further has a 3D model of it.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review. **Q1: How does the training work for scene synthesis? Is the guiding points network trained jointly, end to end with LSDM?** >The training works because of our theoretical findings. Eq. (7) shows that $\tilde{S}$ explicitly contributes to the denoising process, supporting the assumption that $\tilde{S}$ will also be learned concurrently with the denoising process. We trained the guiding point network end-to-end with LSDM. Only the text encoder is pretrained. **Q2: Is S not actually trained to predict the final position, but rather consists more of geometric features? It is hard to determine if S is directly supervised.** >$\tilde{S}$ is not simply a geometric feature; it is designed to forecast the target object (please see our Appendix's Motivation). In fact, because $\tilde{S}$ is not directly supervised, we have to carefully examine whether this quantity is meaningful (L249). The theory and experiments (Corollary 1.2, Table 4, Figure 6 in the main paper, and additional failure cases in Figure 3 of the One-page PDF) further show that $\tilde{S}$ represents a good approximation of the target object. **Q3: Why not initialize diffusion with guiding points, and denoise these?** >There are three reasons why initializing diffusion with guiding points is not a reasonable design choice. First and foremost, the initial state for the denoising process is sampled from an approximation of an isotropic Gaussian distribution [1, 2, 3] rather than from a specific quantity; therefore, initializing the denoising process with the guiding points is uncommon, and conditioning upon guiding points is a more natural approach. Second, as indicated in Eq. (7), denoising also concurrently benefits the learning of the guiding points; therefore, our end-to-end approach has a clear rationale behind its solid performance.
Finally, training in two stages requires longer time and more computational resources. **Q4: Experiments regarding i) initializing diffusion with guiding points and denoising these, and ii) comparison to multi-conditional modeling (concatenating features or combining them through nonlinear (e.g., transformer) layers).** >Following your suggestion, we have implemented the two-stage version: the first stage supervises the guiding points, and the second stage utilizes the pretrained guiding points network. We also implemented another multi-conditional diffusion model (MCDM) that concatenates all of the latent features extracted from the conditions (text prompt, scene entities) and passes them through a transformer layer. The results show that LSDM clearly outperforms both the two-stage method and MCDM. Note that training the two-stage LSDM took about 1.5 times longer than ours.

|||PRO-teXt|||HUMANISE||
|-|-|-|-|-|-|-|
| Baseline | CD | EMD | F1 | CD | EMD | F1 |
| MCDM | 0.630 | 0.726 | 0.357 | 0.858 | 0.875 | 0.251 |
| Two-stage LSDM | 0.562 | 0.621 | 0.437 | 0.744 | 0.806 | 0.353 |
| LSDM (ours) | **0.536** | **0.590** | **0.516** | **0.737** | **0.750** | **0.439** |

**Q5: How does text work given the input is not a single location, but a set of locations?** >If the input includes a set of locations, our method considers each location as a condition and generates new objects based on the current human location, given objects, and the current text prompt condition. However, this case has not been intensively tested in our experiments due to the lack of training data for this scenario. **Q6: Is time assumed to be the last timestep? Does the dataset generate text descriptions based only on the last location of the human?** >We do not assume that text descriptions are based only on the last human location when creating the dataset. Indeed, we label the dataset as in your observation (L98-99 of the Appendix), and we allow the user to "specify text based on any number of timesteps throughout the trajectory e.g.
place the desk in front of me [at frame i<N of N]". **Q7: Is there a breakdown into in-contact vs. not in-contact objects?** >We break down the contact and non-contact results on PRO-teXt in the table below. Our LSDM achieves better performance in both cases.

|||Contact|||Non-contact||
|-|-|-|-|-|-|-|
| Baseline | CD | EMD | F1 | CD | EMD | F1 |
| ATISS | 0.779 | 1.018 | 0.128 | 3.248 | 1.619 | 0.026 |
| SUMMON | 0.780 | 1.001 | 0.139 | 3.324 | 1.600 | 0.028 |
| MIME | 0.717 | 0.978 | 0.145 | 3.179 | 1.597 | 0.024 |
| LSDM (ours) | **0.081** | **0.433** | **0.703** | **0.915** | **0.737** | **0.471** |

**Q8: Is there a wider reason this method is important?** >In terms of theory, we hope that our proposed guiding point is a general concept and may be useful for other tasks. For example, in the visual grounding task, we can predict "guiding pixels" indicating possible boxes from the conditions to guide the denoising process. In terms of application, our method can be used in animation, the metaverse, and gaming. **Q9: About the importance of the scene synthesis task and the writing in the Introduction.** >We appreciate your comments and have revised the Introduction to stress the importance of the scene synthesis task. **Q10: Comparison with text-conditioned diffusion models (Tevet et al.).** >Tevet et al. use text as the *sole input* to generate *human motion*, while we utilize *multiple conditions* (text, human, objects) to generate *new objects*. We believe there is a significant difference in theory and application between the two works. **Q11: About the ability to utilize a sequence of human motion?** >Our method can take a human motion sequence as input. Figure 4 in the One-page PDF shows that a sequence of human motion indeed does not bring significant improvement. >All typos have been fixed. Thanks! References: [1] Ho et al. Denoising diffusion probabilistic models. NeurIPS 2020. [2] Dhariwal and Nichol. Diffusion models beat GANs on image synthesis. NeurIPS 2021. [3] Tevet et al. Human motion diffusion model. ICLR 2023.
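For readers unfamiliar with the CD metric reported in the tables above: Chamfer distance between two point clouds is commonly computed as the symmetric mean nearest-neighbor distance. A minimal NumPy sketch under one common definition follows; the paper's exact normalization may differ:

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer distance between point clouds p (n, 3) and q (m, 3):
    # mean squared nearest-neighbor distance in both directions.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (n, m) pairwise
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.0, 0.0, 0.5])  # shift by 0.5 along z
print(chamfer_distance(a, a))  # 0.0
print(chamfer_distance(a, b))  # 0.25 + 0.25 = 0.5
```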
--- Rebuttal Comment 1.1: Title: Please let us know if you have any further concerns Comment: Dear Reviewer YsUe, Thanks for your constructive efforts in the reviewing process of our paper! Let us know if you have any further concerns before the end of the discussion phase. Thanks, Authors. --- Rebuttal Comment 1.2: Title: Reviewer Response to Rebuttal Comment: After reading the rebuttal and other reviews I change my rating to 6 - weak accept. I believe the paper should be accepted as (1) I agree with the other reviewers that adding text conditioning to human-guided scene layout is an important contribution; (2) contributions of the method, such as the interesting "guiding points", are well-defended in experiments; and finally (3) the rebuttal clarified my concerns about training details, denoising design choices, etc. --- Reply to Comment 1.2.1: Title: Thanks for your reconsideration Comment: Dear Reviewer **YsUe**, We would like to express our appreciation for your thoughtful reconsideration! Best regards, Authors.
Summary: This paper focuses on language-driven scene synthesis, a new task integrating text prompts, human motions, and existing objects as multiple conditions. The proposed task is challenging as it requires a strategy for encoding the multi-modal conditions into a unified space. To solve the problem, the authors introduce a novel guiding points concept to combine multiple conditions, which can explicitly contribute to the denoising process. They also introduce three scene-editing applications based on the text prompt input. They demonstrate the approach empirically and theoretically; the intensive experiments show that the proposed approach achieves significant improvements over the state-of-the-art methods. Strengths: 1. Extending the scene synthesis to a language-driven setting, incorporating text prompts and human motions as input, holds great promise and significance in bridging the gap between research and real-world applications. It also enables downstream real-world scene editing applications. 2. This paper proposes a somewhat novel method to handle such a multi-conditional setting. The authors introduce guiding points that explicitly guide the reverse process of the diffusion model, offering a departure from the implicit unification approach used in previous multi-conditional diffusion models. 3. This paper's theoretical demonstration and experimental analysis are comprehensive, especially the ablative experiment, which demonstrates the impact of different modalities and how the proposed modules contribute to the overall performance. Weaknesses: 1. In comparing MIME to your approach, I notice that MIME focuses on generating 3D scenes based on 3D human motion, whereas your method takes a human pose as input. Considering this distinction, is it fair to make a direct comparison between MIME and your approach? 2. It has come to my attention that Proposition 2 and Corollary 2.1 are included in section 3.2. 
However, it may be more suitable to relocate them to the supplementary materials. I am uncertain about the significance of these components within the section, and it seems that the author included them primarily to showcase their expertise. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you please elaborate on the methodology used to convert the point cloud into an object mesh shown in your figures and video? 2. I'm curious about the performance of directly learning the unification of multiple conditions with a diffusion model. It appears that the proposed method outperforms others, but what about the performance gap between these two methods? 3. Based on my comprehension, does your model exclusively utilize a single-frame human pose as input rather than a motion sequence? If that is the case, I'm curious about the rationale behind this choice, considering that a motion sequence could potentially provide more comprehensive information about the scene distribution. By the way, how do you represent the human body? Minor fixes: - The video in the supplementary material is excellent, but you can provide some visualization of raw point cloud results. - Authors are advised to provide a limitation and future work section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not discussed this paper's limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful review. **Q1: As MIME utilizes a sequence of human motions, your method only utilizes a single-frame motion. Does your model exclusively utilize a single-frame human pose as input rather than a motion sequence?** >Yes. We believe this distinction is minor in our problem setting because the target objects are conditionally constrained only on the *moment* the user commands, and they depend little on the human's past motions. Therefore, the final human pose before executing the command gives adequate information to complete the task. Thus, no unfair comparison took place even though the inputs may differ. **Q2: Is it fair to make a direct comparison between MIME and your approach?** >We remark that our model can take either a single-frame or a multi-frame human pose. We have also included a study of the impact of the number of frames on scene synthesis results in Figure 4 of the One-page PDF. The experimental results indicate that the performance of our LSDM varies little when taking different numbers of human pose frames as input. Consequently, one frame is enough for our network. In addition, we also implemented a variation of MIME with a text prompt by concatenating CLIP text encoder features with the latent features at the transformer layer of MIME's architecture. The results are as follows.

|||PRO-teXt|||HUMANISE||
|-|-|-|-|-|-|-|
| Baseline | CD | EMD | F1 | CD | EMD | F1 |
| MIME | 2.0493 | 1.3832 | 0.0990 | 5.4259 | 2.0837 | 0.0628 |
| MIME with text | 1.8424 | 1.2865 | 0.1032 | 4.7035 | 1.8201 | 0.0849 |
| LSDM (ours) | **0.5365** | **0.5906** | **0.5160** | **0.7379** | **0.7505** | **0.4395** |

**Q3: What is the rationale behind this choice?** >There are two key reasons why we only take a single-frame human pose as input. First, the dependency between text prompts and human motions is temporal, i.e., the semantics of placing objects depends on one and only one frame.
In fact, if there is a text prompt "Place a table in front of me" and multiple frames, it would be ambiguous to determine which moment the table refers to. Second, the human pose frame referenced to the prompt gives adequate information, which is indicated in Section 3 of our Appendix and Figure 4 in our One-page PDF. **Q4: It has come to my attention that Proposition 2 and Corollary 2.1 are included in section 3.2. However, it may be more suitable to relocate them to the supplementary materials.** >Thank you for your suggestion. As suggested by you and Reviewer **uwsL**, we have shortened Remark 1.2 and moved Proposition 2 + Corollary 2.1 to the Appendix. **Q5: Significance of Proposition 2 and Corollary 2.1?** >Corollary 2.1 provides a reliable measurement for evaluating guiding points. L152 indicates an important observation; that is, a smaller MSE between the predicted guiding points $\tilde{S}$ and $\mu_0$ corresponds to a more accurate estimation. This observation gives sufficient evidence to conclude from Table 4 that our guiding points are meaningful. **Q6: Could you please elaborate on the methodology used to convert the point cloud into an object mesh shown in your figures and video?** >We utilize the object recovery algorithm in [1] (Line 212 in our main paper). The key idea is to iterate through all possible objects and then determine the one that aligns the most with the point cloud. **Q7: Performance of directly learning the unification of multiple conditions with a diffusion model?** >We have implemented another multi-conditional diffusion model (MCDM) to directly unify all the conditions and pass the latent features through a transformer layer. The performance of this latent mechanism still lags behind our LSDM's. The results are reported in the following table. 
| | | PRO-teXt | | | HUMANISE | |
|-------------|------------|------------|------------|------------|------------|------------|
| Baseline | CD | EMD | F1 | CD | EMD | F1 |
| MCDM | 0.6308 | 0.7269 | 0.3579 | 0.8583 | 0.8757 | 0.2505 |
| LSDM (ours) | **0.5365** | **0.5906** | **0.5160** | **0.7379** | **0.7505** | **0.4395** |

**Q8: How do you represent the human body?** >We use point clouds or SMPL models as in several works, e.g., [1, 2, 3] (Line 179 in our Appendix). For the HUMANISE dataset, we use SMPL models to represent humans. For the PRO-teXt dataset, the input for human motion is a 3D point cloud, following the practice of [1]. **Q9: You can provide some visualization of raw point cloud results.** >We provide the visualization in Figure 1 of our attached One-page PDF. Please see the attached file. **Q10: Authors are advised to provide a limitation and future work section.** >Thank you for your comments. A limitation of our method is that the theoretical findings rely on an assumption constrained to uniform data like point clouds. The predicted guiding points are not always aligned with the target object, as indicated in the results of Tables 1 and 4 of the main paper. Furthermore, the editing results leave room for improvement in future work. The broader impact of our paper lies in the potential applications in VR, animation, and the metaverse. We have included these details in our paper. References: [1] Ye et al. Scene synthesis from human motion. In SIGGRAPH Asia 2022. [2] Kocabas et al. PARE: Part attention regressor for 3D human body estimation. In ICCV 2021. [3] Rempe et al. HuMoR: 3D human motion model for robust pose estimation. In ICCV 2021. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: Thanks for your response. I have no further questions. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for your comments!
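To make the guiding-point idea discussed in this thread concrete: the reviews describe the guiding points as a reference point cloud combined from the conditions (human pose, existing objects). Below is a minimal sketch of one plausible reading, a normalized weighted sum of condition point clouds; the weights, inputs, and function name are illustrative assumptions, not the authors' exact network:

```python
import numpy as np

def guiding_points(condition_clouds, weights):
    # Illustrative only: combine per-condition point clouds (same shape each)
    # into a single reference cloud via weights normalized to sum to 1.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * c for wi, c in zip(w, condition_clouds))

human = np.zeros((4, 3))   # stand-in cloud from the human pose condition
table = np.ones((4, 3))    # stand-in cloud from an existing object condition
s = guiding_points([human, table], weights=[1.0, 3.0])
print(s[0])  # every point equals 0.25 * 0 + 0.75 * 1 = 0.75
```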
Summary: This paper targets generating 3D scenes conditioned on text prompts and other inputs, e.g., room layouts. For this purpose, the authors operate on a 3D point cloud representation and propose a multi-conditional diffusion model that generates guiding points to achieve 3D scene synthesis. The experiments are evaluated on a synthetic indoor dataset. Strengths: - Adopting human pose into the 3D scene generation process is a novel condition to consider during generation. Weaknesses: - The authors did not motivate well why a diffusion model is necessary or better for this task. Given the large amount of prior work in scene layout, is there any advantage of the diffusion model, such that it can do something prior methods cannot? - Their dataset is too simple. On one hand, the authors deal with a 3D point cloud representation, which is usually noisy and sparse in real-world scanning data. On the other hand, they only test the solution on a synthetic dataset, which seems to be in a different distribution from real-world scans. An evaluation on a real-world dataset would make the work more solid. - Their video supplementary is confusing in terms of what kind of application they are aiming at. Is the audio cut off accidentally in the mp4? As for the application, do the authors hope to leverage human pose to generate 3D objects in the indoor scene? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Could you justify more the choice of a diffusion model for this task? 2. The citation format is not the one suggested by NeurIPS 2023. Please check the submission website to adjust in a later revision. 3. What application is the proposed technique aiming to realize? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: good. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your valuable feedback and thoughtful review. Please see below our responses and let us know if you have any further questions. **Q1: Could you justify more the choice of a diffusion model for this task?** >There are three reasons to leverage the diffusion model in our paper. First, prior methods in the literature *do not consider a comprehensive set of conditions* as ours; for example, some of them do not consider text prompts while others do not consider human motion. Second, diffusion models are exceptional at conditional generation, as has been observed in many successes such as [1] and [2]. Third, point clouds can be viewed as particles in a thermodynamic system [3]; therefore, it is natural to apply diffusion probabilistic models [4]. **Q2: Is there any advantage of the diffusion model, such that it can do something prior methods cannot?** >The key advantage we take from diffusion models is their strong *guidance* ability on given conditions. We justify in our paper that the guidance of our method is theoretically supported (in Remark 1.2 of Section 3.2). As the object space is sparse and agnostic, a guidance strategy (in our case, the guiding point network) gives prior information about the possible span and shape of the target object, effectively guiding the network to diffuse the rest. We further remark that in [5], the authors establish a more robust baseline with a text-guidance strategy than other state-of-the-art generative models (including GANs). **Q3: Their dataset is too simple and they only test the solution on a synthetic dataset.** >The datasets we used are currently the most recent datasets in this field. Recent work, such as ATISS, SUMMON, and MIME, also only utilized synthetic datasets, not real-world environments. Our considered datasets are based on HUMANISE and PROXD, which are also widely studied in this field [6].
We agree with you that testing on more real-world datasets would be more meaningful; however, we do need to wait for such a feasible dataset. **Q4: Their video supplementary is confusing in terms of what kind of application they are aiming at, and what application is the proposed technique aiming to realize?** >Our problem has the potential to apply to character animation or the metaverse, where embodied agents can interact and give commands to generate objects that are aligned with the scene's spatial arrangement and user preferences. We have included a Broader Impact section to discuss the applications of our paper. Particularly, our technique can be applied when a user enters an empty apartment and gives commands to automatically generate objects (e.g., "putting a table next to the bed", "placing a sofa behind the chair") to arrange the furniture (where physical contact is not mandatory). **Q5: Is the audio cut off accidentally in the mp4?** >Our video does not include audio in this version. **Q6: As for the application, do the authors hope to leverage human pose to generate 3D objects in the indoor scene?** >Yes, we believe that the generation of objects conditioned on user preferences (such as text prompts and human pose) can be applied to animation, the metaverse, or gaming. **Q7: About the citation format.** >Thank you for your suggestion. The citation style has been revised in our final version. References: [1] Tseng et al. Edge: Editable dance generation from music. In CVPR 2023. [2] Tevet et al. Human motion diffusion model. In ICLR 2023. [3] Luo and Hu. Diffusion probabilistic models for 3D point cloud generation. In CVPR 2021. [4] Ho et al. Denoising diffusion probabilistic models. In NeurIPS 2020. [5] Dhariwal and Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS 2021. [6] Yi et al. Human-aware object placement for visual environment reconstruction. In CVPR 2022.
--- Rebuttal Comment 1.1: Title: Looking forward to your response Comment: Dear Reviewer RVaZ, Thanks for your endeavors in the reviewing process! Please let us know if you have any further questions before the end of the author-reviewer discussion phase. Thanks, Authors.
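The rebuttal's point in Q1, that point clouds behave like particles being diffused in a thermodynamic system ([3], [4]), can be illustrated with the standard DDPM forward-noising step. This is a generic sketch and not the authors' LSDM implementation; the linear noise schedule and the toy cloud are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # a common linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)     # \bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(x0, t):
    """Forward process q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

cloud = rng.standard_normal((1024, 3))  # a toy point cloud: 1024 points in 3D
noised = q_sample(cloud, t=T - 1)       # near t = T the cloud is almost pure noise
```

Each point diffuses independently toward an isotropic Gaussian; the learned reverse process (conditioned, in this paper's setting, on text, human motion, and the scene) then denoises a fresh Gaussian cloud back into an object.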
Summary: This paper deals with scene synthesis with human pose, room layout, and text prompts. The main architecture is a multi-conditional diffusion model, which performs progressive generation, where a new object is synthesized and conditioned on the existing scene point cloud and the language description. The key contribution is a guiding point network, which first generates a reference point cloud as a weighted sum of existing objects and human pose; the reference point cloud is then used as a condition to guide the denoising process for a new object. The trained network allows scene generation guided by language and can produce semantically meaningful scene edits. Strengths: The progressive generation of scenes guided by language makes it much easier to interact with the synthesis process and alleviates the control burden from the designer's side. The experiments show the effectiveness of the proposed pipeline and the learning objective. The ablation study is interesting. Weaknesses: The architecture is quite intuitive, but the derivation is not quite clear and seems disconnected from what the author wants to do. Equations (2) and (3) are standard, but starting from equation (4), when the guiding point is introduced, some unsureness kicks in. For example, why do you assume that x_0 follows a uniform distribution over a domain S, how do you define S in the first place, and why is x_0 being uniform a good assumption? Also, why is q(y|x_0) non-zero and uniform over S? What is the difference between S and S_hat, and what do you mean by a sampling set of x_0? Why does q(x_0) then become uniform? How do you infer that q(x_0|y) is also uniform over S? If q(x_0) is uniform, is \mu_0 the center of the region S, and why is this a meaningful quantity in your consideration? Why is S_tilde a sampling set of S_hat? Eq. (10) is very intuitive, so why do we need all the previous derivations? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and valuable feedback. **Q1: Why do you assume that $x_0$ follows a uniform distribution over a domain $S$, and why is this a good assumption?** >Our assumption is based on the fact that we uniformly sample the point cloud out of each object. For your follow-up question, uniform sampling is well-suited for our problem settings (3D point clouds) for three reasons. The first reason is that since we highly focus on objects' spatial arrangement, uniformly sampled point clouds are ideal for capturing the high-level geometry of the target objects [1]. Second, uniform sampling for 3D point clouds has been widely employed in previous works such as [1], [2], [3]. Third, uniform sampling is a computationally efficient strategy when compared to alternative methods; for example, uniform sampling has O(N) computational complexity, while Poisson disk sampling has O(N log N) complexity in typical cases [4]. **Q2: How do you define $S$ in the first place?** >$S$ represents the space of the interior of the object $O_{M+1}$. We have included this definition at the beginning of Section 3.2 (L120). Thank you for bringing this to our attention. **Q3: Why is $q(y|x_0)$ non-zero and uniform over $S$?** >$q(y|x_0)$ is non-zero and uniform over $S$ under the assumption of Remark 1.2 (L131). This assumption derives from the fact that as long as we uniformly sample $x_0$ out of $S$, $x_0$ serves as a geometric representation of the target object. The meaning of the conditions $y$ (including the text prompt and other scene entities) remains unchanged as they only depend on the target object's geometry. When $x_0$ is sampled from $S$, $q(y|x_0)$ is non-zero as the alignment of the conditions with the target object is meaningful. **Q4: What is the difference between $S$ and $\hat{S}$? And what do you mean by a sampling set of $x_0$?** >$\hat{S}$ is a discretized set of the continuous space $S$.
Similarly, when we refer to a "sampling set of $x_0$", we mean a discretized set of $x_0$. **Q5: Why does $q(x_0)$ then become uniform?** >$q(x_0)$ is already uniform under our assumption of Remark 1.2 (please see Line 131 of our main paper). **Q6: How do you infer that $q(x_0|y)$ is also uniform over $S$?** >Thank you for pointing this out. However, upon further investigation, we found that the uniform property of $q(x_0|y)$ is not necessary for the construction of Eq. (7) and Corollary 2.1. Therefore, we have removed this sentence. **Q7: If $q(x_0)$ is uniform, is $\mu_0$ the center of the region $S$?** >The notation $\mu_0$ in L137 is originally used to denote the predicted mean of the initial probability distribution $q(x_0)$. However, the notation $\mu_0$ in L143 is used to denote the centroid of $S$. To avoid confusion and potential misunderstandings, we have made a correction by changing the notation in L137 from $\mu_0$ to $\tilde{\mu}_0$. Regarding your question, when $q(x_0)$ is uniform, the predicted mean $\tilde{\mu}_0$ indeed represents the center of the region $S$. **Q8: Why is this a meaningful quantity in your consideration?** >The predicted mean $\tilde{\mu}_0$ is meaningful to our paper because $\tilde{\mu}_0$ is the connection between theory and our network design. In theory, $\tilde{\mu}_0$ is an estimation of $x_0$. In the motivation for the network design (Section 3 of the Appendix), we show that by applying transformation matrices to scene entities, we can predict the centroid of the target object, which is formulated as $\tilde{\mu}_0$ in this case. We further remark that the estimation term $\tilde{S}$ is not restricted to the design choice of $\tilde{\mu}_0$ and can be designed differently in other tasks. **Q9: Why is $\tilde{S}$ a sampling set of $\hat{S}$?** >$\tilde{S}$ is defined as the sampling set of the predicted $\tilde{\mu}_0$ (L138); therefore, it serves as an estimation for $\hat{S}$. $\tilde{S}$ is not a sampling set of $\hat{S}$. **Q10: Eq.
(10) is very intuitive, why do we need all the previous derivations?** >Eq. (10) alone is not adequate to support our central claim that guiding points explicitly contribute to the denoising process; therefore, we need a more explicit interpretation, as in Eq. (7). From Eq. (7), we observe that the term $\tilde{S}$ has an explicit contribution to the denoising process, leading to the assumption behind our network architecture: guiding points $\tilde{S}$ can be learned concurrently with the denoising process. Consequently, we design our network to learn guiding points jointly with the denoising process. Experiments (Table 1, Table 4, and Figure 6) confirm this assumption. We further establish Corollary 2.1 to achieve a reliable measurement for evaluating guiding points. L152 indicates an important observation for our measurements; that is, a smaller MSE between the predicted guiding points $\tilde{S}$ and $\mu_0$ corresponds to a more accurate estimation. **Q11: No limitation is discussed.** >The limitation of our method is that the theoretical findings rely on an assumption restricted to uniform data such as point clouds. The predicted guiding points are not always aligned with the target object, as indicated in the results of Tables 1 and 4 of the main paper. Furthermore, the editing results leave room for improvement in future work. We have included a Limitation section in our final version. References: [1] Qi et al. PointNet: Deep learning on point sets for 3d classification and segmentation. In CVPR 2017. [2] Yu et al. PU-Net: Point cloud upsampling network. In CVPR 2018. [3] Lyu et al. A conditional point diffusion-refinement paradigm for 3d point cloud completion. In ICLR 2022. [4] Yuksel, C. Sample elimination for generating Poisson disk sample sets. In Eurographics 2015. --- Rebuttal Comment 1.1: Title: Let us know if you have any further questions Comment: Dear Reviewer aT28, Thanks for your efforts in the review!
Please let us know if you have any further concerns before the end of the discussion phase. Thanks, Authors.
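The uniform-sampling assumption discussed in Q1 above (and its O(N) per-point cost) corresponds to the standard area-weighted triangle sampling routine; the sketch below is an illustration of that generic technique, not code from the paper, and the unit-square mesh is an assumed toy input.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_surface(verts, faces, n):
    """Uniformly sample n points on a triangle mesh: O(n) after one area pass.

    Pick a face with probability proportional to its area, then draw uniform
    barycentric coordinates inside that triangle (reflecting u + v > 1 back in)."""
    tri = verts[faces]                                   # (F, 3, 3) triangle corners
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0                                   # reflect into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

# toy "object": a unit square in the z = 0 plane, split into two triangles
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
pts = sample_uniform_surface(verts, faces, 2000)
```

The per-sample work is constant after the single pass over faces, which is the O(N) behaviour contrasted with Poisson disk sampling in the rebuttal.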
Rebuttal 1: Rebuttal: **General Response** Dear ACs and Reviewers, Thanks for your valuable reviews and insightful comments, which have helped us improve our paper. During the initial reviews, Reviewers **uswL**, **aT28**, and **c9eB** were inclined toward acceptance. We are glad that our proposed language-driven scene synthesis task "is novel" (Reviewer **RVaZ**), "is an interesting direction" (Reviewer **uswL**), can "alleviate the control burden from the designer's side" (Reviewer **aT28**), and "holds great promise and significance in bridging the gap between research and real-world applications" (Reviewer **c9eB**). We are also encouraged that our proposed guiding points network "is a cool contribution" (Reviewer **YsUe**) and that "the experiments show the effectiveness of the proposed pipeline and the learning objective" (Reviewer **aT28**). The common concern raised by the reviewers is the significance of Section 3.2 (Reviewers **uswL**, **aT28**, **c9eB**). We have explained that Section 3.2 is important to our paper for two reasons. First, Remark 1.2 establishes Eq. (7), which encodes our paper's central assumption that guiding points $\tilde{S}$ explicitly contribute to the denoising process; thus, $\tilde{S}$ can be learned concurrently with the denoising process. Furthermore, $\tilde{S}$ of Eq. (7) *connects* the theory with our network architecture. In the Motivation section of the Appendix, we show that by applying transformation matrices to scene entities, we can predict the centroid of the target object ($\tilde{S}$ of Eq. 7), leading to the network design in Section 3.3. Reviewer **YsUe** has questioned the workings and design choices of our network architecture; we believe that the answer to this question is rooted in Remark 1.2, underscoring that Remark 1.2 is indeed significant.
Second, in Section 3.2, we also establish Corollary 2.1, which implies another intuitive observation in L152: a smaller MSE between the predicted guiding points $\tilde{S}$ and $\mu_0$ corresponds to a more accurate estimation. This implication serves as a measurement method, and therefore, from the result of Table 4, we can conclude that our guiding points are meaningful and well-aligned with the centroids of the target objects. While we believe Section 3.2 is significant, we do take the reviewers' feedback. We agree that Section 3.2 uses too much space (Reviewer **uswL**) and that it may be more suitable to relocate Proposition 2 and Corollary 2.1 to the supplementary materials (Reviewer **c9eB**). We have shortened the interpretation of Remark 1.2 and moved Proposition 2 and Corollary 2.1 to the Appendix. We have also revised some notations based on the suggestion of Reviewer **aT28** to avoid confusion and potential misunderstandings. Finally, we have included more experiments, including MIME with text (suggested by Reviewer **uswL**), a multi-conditional diffusion model (MCDM) (suggested by Reviewers **c9eB**, **YsUe**), and a two-stage LSDM (suggested by Reviewer **YsUe**). The outcomes confirm that our proposed method significantly outperforms the other comparative methods. We additionally include a study of the impact of the number of human pose frames on our LSDM in the one-page PDF. This study indicates that utilizing more human pose frames does not lead to better performance, thus verifying that one human pose frame is enough for our method. We look forward to responding to any further questions you have on our submission. **Summary of Revision** Integrating the suggestions and feedback from all reviewers, besides fixing typos and notations, we have made the following important updates in the revision.
- We have included a Limitation section to describe the limitations of our method, including the uniform-data assumption of the theoretical findings and failure cases of predicting guiding points.
- We have added a Broader Impact section to discuss the potential applications of our problem.
- We have shortened the interpretation of Remark 1.2 and moved Proposition 2 and Corollary 2.1 to the Appendix.
- We have updated the notations of Remark 1.2 to avoid confusion and potential misunderstandings.
- We have moved the motivational example in Section 3 of the Appendix to the beginning of Section 3.3 of the main paper.
- We have revised Figure 2 so that the depicted operations fit better with the explanation in the text.
- We have revised Section 4 in the Appendix, including more step-by-step explanations of our network architecture. Table 2 in the Appendix is also appended with a column describing each component's functionality. A clear linkage is also ensured between the introduced explanations and Table 2.
- We have added and discussed experiments with MIME with text, MCDM, and two-stage LSDM in Table 1.
- We have included a study on the impact of the number of human pose frames on scene synthesis results in Section 7 of the Appendix. Pdf: /pdf/1e59200b34fa872711d16b31103c6f4379df9df5.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a new task named language-driven scene synthesis. This new task takes text prompts, human motion, and existing objects to generate the next object in the scene. To handle the multiple conditions, they design a guiding points strategy to unify them. It first explicitly predicts a "pseudo" target point cloud from the conditions and then uses these predicted points as a guide for the diffusion model to predict the "true" target point cloud. They demonstrate that their approach is theoretically supported. In the experiments, they show that their method outperforms the state-of-the-art baselines. Furthermore, they introduce three scene editing tasks that are useful for applications. Strengths: - The proposed language-driven scene synthesis task integrates text prompts, human motion, and existing objects as conditions. It is an interesting direction that injects user preference into scene synthesis and thus enables real-world scene editing applications with text prompts. - To handle the multiple conditions, the authors revisit the point cloud representation and propose a guiding point concept to use the conditions explicitly. They first predict a "pseudo" target point cloud from the conditions and then use these predicted points to guide the diffusion model to predict the "true" target point cloud. This explicit strategy injects a strong inductive bias to utilize all the conditions for placing the next object. - The experimental part is extensive and demonstrates that the proposed method with text prompts, human motion, and existing objects as conditions achieves the best results compared with the baselines. Weaknesses: - I have a question regarding the application of the proposed new task. When we take only human motion as a condition for scene synthesis, MIME (Yi et al., 2023) treats this as "turning human movement in a "scanner" of the 3D world."
In your proposed task that uses human motion and text prompts, I understand it is useful when we want to place a table in the VR setting. However, what is the use case if the human motion is sitting down and the prompt is "put a chair under the human"? We cannot use this in VR since we cannot sit without a real chair. - I don't like the presentation of this paper. The reasons are as below. 1) Section 3.2 seems to break the flow of the whole paper. After reading this subsection, I need to go back to Section 3.1 multiple times to remind myself of the notation for Section 3.3. It is suggested to make the theoretical support in the main paper shorter and at a high level and move the rest to the supplement. 2) Since Section 3.2 uses too much space in the main paper, the authors make Section 3.3 short and unclear. However, this is the main contribution of this paper. It is messy for the audience to read the operations with only unclear text descriptions (also without any shape information for the variables). It is suggested to add equations or pseudocode to describe the operations. 3) Figure 2 is also unclear. For example, for the text, it is stated that "the input key is the text embedding e′, the input queries are the given scene entities." However, in Figure 2, the text embedding e′ and the scene entities are concatenated and then fed to the attention module. 4) I read the supplement. The motivation part deserves to be moved to the main paper. The implementation details also need to be clarified. In particular, Table 2 is unreadable. Why not list the equations of the operations? - For the baselines, can you add text prompt conditions to MIME for your proposed task for a fair comparison? Considering their method is transformer-based, it should be easy to add text conditions. - For the editing tasks, is the target object necessary to be the M+1 object? In the text prompt, the target object is already indicated.
In this case, it seems that we can change any object in the scene instead of only the last one. - In Line 171, you claim that "we extract spatial information from the text prompt by utilizing the off-the-shelf multi-head attention layer." What is the meaning of "off-the-shelf" here? - In Table 2 of the supplement, the output of the text encoder is 1D. I remember that the output of the CLIP text encoder is a list of tokens. Do you apply pooling here? Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Please refer to the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: It seems that the authors do not properly discuss the limitation and the broader impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. **Q1: Application of the proposed new tasks? And the "chair" use case in VR.** >We acknowledge and agree with your example regarding the feasibility of the chair in VR settings, as users do not physically sit on the chair. However, we believe our proposed task is still helpful in applications with non-contact objects. For example, our method can be applied when a user is entering an empty apartment and giving different commands (e.g., "putting a bed in the corner," "placing a table next to the sofa") to arrange the furniture (where physical contact is not mandatory). Using natural language input allows scene synthesis not to rely solely on human motions, which is widely assumed in previous works [1, 2]. Furthermore, our proposed tasks hold significant potential in alternative applications like animation or the metaverse. In these contexts, users can control embodied agents to interact with synthesized objects, including the chair in your question (e.g., the chair will be generated for the animated character in a metaverse environment, and the real user does not necessarily need a physical chair). **Q2: About the writing of Section 3.2 and Section 3.3.** >We genuinely appreciate your insights, particularly regarding the length of Section 3.2. Following your suggestions, we have shortened Remark 1.2 and moved Proposition 2 and Corollary 2.1 to the Appendix. The Motivation and Guiding Point network sections from the Appendix have been moved to the main paper. We have included additional equations in Section 3.3 to provide a step-by-step description of the implementation details of our method in the revised version. **Q3: About Table 2 in the Appendix.** >We have included a column in Table 2 to explain the functionality of each component. Below is a brief revision of this table.
| Component | Description | Input shape | Output shape |
|-----------|-------------|-------------|--------------|
| (i) | A human pose backbone extracting features from human motion | N x 3 | N x 3 |
| (ii) | A point cloud backbone extracting features from M objects | M x N x 3 | M x N x 3 |
| (iii-a) | A text encoder (CLIP or BERT) | Any | 1 x D |
| ... | ... | ... | ... |

**Q4: About Figure 2 in the main paper.** >We have fixed Figure 2 based on your suggestion. Our revised Figure is included in the One-page PDF. **Q5: The motivation part deserves to be moved to the main paper.** >We are glad to hear you acknowledge our motivation, and we have moved the motivational example to our main paper. **Q6: Add text prompt conditions to MIME for your proposed task for a fair comparison.** >We implement your suggested method by concatenating CLIP text encoder features with the latent features at the transformer layer of MIME's architecture. Notably, by utilizing text prompts, MIME exhibits marginal improvements over the original results. Nevertheless, the outcome suggests that a latent strategy to incorporate text prompts upon existing works may be insufficient to solve the proposed problem effectively. We report the results in the following table.

| Baseline | CD (PRO-teXt) | EMD (PRO-teXt) | F1 (PRO-teXt) | CD (HUMANISE) | EMD (HUMANISE) | F1 (HUMANISE) |
|----------|---------------|----------------|---------------|---------------|----------------|---------------|
| MIME | 2.0493 | 1.3832 | 0.0990 | 5.4259 | 2.0837 | 0.0628 |
| MIME with text | 1.8424 | 1.2865 | 0.1032 | 4.7035 | 1.8201 | 0.0849 |
| LSDM (ours) | **0.5365** | **0.5906** | **0.5160** | **0.7379** | **0.7505** | **0.4395** |

**Q7: Is the target object necessary to be the $M+1$ object?** >Certainly not. Our proposed method allows any object to be modified.
If we want to change a specific object in the scene, we only need to rearrange the order of the other objects so that the target object comes last, and then execute the conditional generation. **Q8: What is the meaning of off-the-shelf in L171?** >Off-the-shelf means we utilize a conventional architecture without modifying it. In this context, we leverage the standard implementation of a transformer encoder. **Q9: The output of the text encoder is 1D. Do you apply pooling here?** >No, we do not apply pooling. Instead, we utilize the features from the End-Of-Text (EOT) token, resulting in a 1D representation of the text prompt. **Q10: It seems that the authors do not properly discuss the limitation and the broader impact of their work.** >The limitation of our method is that the theoretical findings rely on an assumption restricted to uniform data such as point clouds. The predicted guiding points are not always aligned with the target object, as indicated in the results of Tables 1 and 4 of the main paper. Furthermore, the editing results leave room for improvement in future work. The broader impact of our paper lies in the potential applications of VR, animation, and the metaverse. We have included these details in the final version of our paper. Thank you for your comments. References: [1] Ye et al. Scene synthesis from human motion. In SIGGRAPH Asia 2022. [2] Yi et al. MIME: Human-aware 3D scene generation. In CVPR 2023. --- Rebuttal Comment 1.1: Title: Looking forward to your reply Comment: Dear Reviewer uwsL, Thanks for your hard work in the reviewing process! Please let us know if you have any further questions before the end of the author-reviewer discussion phase. Thanks, Authors. --- Rebuttal Comment 1.2: Title: Response to rebuttal Comment: Thanks for the detailed rebuttal and the revision of the manuscript. The revision makes the presentation clearer now. I would like to raise my score to WA.
--- Reply to Comment 1.2.1: Title: Thanks for your reconsideration Comment: Dear Reviewer **uwsL**, Thank you for your reconsideration! Best regards, Authors.
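The CD metric reported in the comparison table in this rebuttal is typically computed as a symmetric nearest-neighbour average between point clouds; a minimal NumPy version is sketched below. This is an illustration of the standard Chamfer distance, not the authors' evaluation code, and it ignores any normalization or scaling conventions the paper may use.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (n, 3) and Q (m, 3)."""
    # pairwise squared distances via broadcasting: d2[i, j] = ||P_i - Q_j||^2
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    # average nearest-neighbour distance in both directions
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
P = rng.standard_normal((256, 3))
shifted = P + np.array([10.0, 0.0, 0.0])  # a translated copy scores strictly worse
```

A cloud compared with itself scores exactly zero, and the measure is symmetric in its two arguments, which makes it a convenient sanity check for generated versus ground-truth objects.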
What can a Single Attention Layer Learn? A Study Through the Random Features Lens
Accept (poster)
Summary: This paper explores the learning capabilities of a single attention layer, assuming keys and queries to be random and frozen (as in the random features model). Strengths: The paper is very well written, and the technical claims look formally supported. The paper deals with an important problem, which is to theoretically better characterize the representation power of single attention layers. Weaknesses: Typo in line 95 "theorey" The family of target functions considered in the discussion doesn't contain terms that consider the interaction between token $i$ and token $j$, with $i, j \neq 0$. The success of attention models also hinges on capturing the relation between different tokens in the context. It looks like this work leverages something orthogonal, and I wonder if the results are really representative of the attention layer's power. Typo in line 176 "$i$" It is not clear to me how the permutation invariance of the input tokens represents a valuable property over which attention should perform better than MLPRF. It could be that the toy model analyzed has this power as a natural difference with respect to the standard MLPRF, without really capturing the properties of attention. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: You consider single query token models. This simplifies the derivation, and I agree that it is a reasonable simplification. Can you still argue more on this choice? Is this equivalent to considering a model with $N$ queries, after a left inner product with the canonical basis vector $e_1$? Is this method used in practice in single-output attention layers? Has it been considered in previous theoretical work? Are your results optimization agnostic? Is there any possible way to bridge the solutions you mention in Theorem 1 with the solutions found by a GD algorithm? In this sense, are the assumptions over the constrained class $\mathcal V_M$ going in that direction?
It is not clear to me in Theorem 2 if your minimizer $\hat V$ is unique, or if you don't need it, as any $\hat V$ would respect your claim. Can you provide more intuition for why permutation invariance is a property that is intrinsic to tasks where attention outperforms fully connected models? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: No need Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We respond to the questions as follows. > You consider single query token models... Has it been considered in previous theoretical work? The family of target functions considered in the discussion doesn't contain terms that consider the interaction between token $i$ and token $j$ with $i \neq j$.  Our single-query token model is equivalent to **simply looking at the first output token** of a full self-attention layer (cf. Lines 112-116), abstracting out the query token as $x_0$ and the key tokens as $x_{1:N}$. This does not change the model at all, and merely avoids repeating our arguments $N$ times for the $N$ output tokens. Our results can be easily mapped back to full self-attention by repeating our arguments on all $N$ output tokens. In that case, the attention model would involve interaction between any $x_i$ and $x_j$ (instead of just $x_0$ and $x_j$). There is a large body of literature that constructs sequence-to-sequence transformers to implement a sequence-to-single-token function by "reading out" from a certain output token (such as the last output token), for example, Garg et al. (2022) and Akyurek et al. (2022). Our model can be seen as a single-layer version of these transformers. We will add a discussion about this point in our revision. > Is this equivalent to consider a model with $N$ queries, after a left inner product with the canonical basis vector $e_1$? Is this method used in practice in single-output attention layers? It is equivalent to using only the first token as the query. It is not equivalent to applying $e_1^\top$ on the left to all $N$ query tokens. > Are your results optimization agnostic? Is there any possible way to bridge the solutions you mention in Theorem 1 with the solutions found by a GD algorithm? In this sense, are the assumptions over the constrained class $\mathcal{V}_M$ going in that direction? Our results are optimization agnostic.
Any standard convex optimization algorithm (including GD) for the constrained problem on $\mathcal{V}_M$ (or an equivalent regularized problem) can find an approximate solution efficiently. In our experiments, we used Adam with weight decay, which we found was a good enough optimizer on all our problem instances. > In Theorem 2, is your minimizer $\hat{V}$ unique, or do you not need uniqueness?  The minimizer $\hat{V}$ indeed may not be unique, and we don't need uniqueness in Theorem 2; the statement holds for any minimizer (due to the uniform concentration argument). > Can you provide more intuition for why permutation invariance is a property that is intrinsic to tasks where attention outperforms fully connected models? It is not clear to me how the permutation invariance of the input tokens represents a valuable property over which attention should perform better than MLPRF. It could be that the toy model analyzed has this power as a natural difference with respect to the standard MLPRF, without really capturing the properties of attention. Our attention models can only fit permutation invariant target functions due to the structure of attention heads. On such target functions, our random-feature attention (RFA) models do achieve better sample complexities than RFMLP as they exploit this structure. We remark that we also have results on comparing different weight distributions in RFA, where query-key matrices with non-zero means achieve better sample complexities than zero-mean ones for learning certain functions (Section 4). These results are more intrinsic to the attention structure and do not have analogues in RFMLP models to our best knowledge. See our Additional Response to All Reviewers for more details. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. I will follow up with additional questions. - It is not equivalent to applying $e_1^\top$ on the left to all $N$ query tokens. I'm a little confused on this.
As the softmax is applied row-wise, introducing the other $N$ tokens in the attention matrix wouldn't change the first row (the one generated by $q_0$). Looking only at this first row later (multiplying the attention matrix on the left with $e_1^\top$) should give the same output your model is considering. - On the permutation invariance of target functions. Sorry if I was not clear enough before. Your work proves that RFA models are better at learning permutation invariant target functions (or other functions defined in Section 4). Can you argue why this could be a reason why attention layers perform better than other architectures (e.g. RFMLP) in natural language tasks? At the moment, I do not see any connection between the target functions you study and what could be a (very toyish) NLP target function. Following up on this last point, I want to remark again on my point raised as the second weakness (the relation between tokens $i$ and $j$), if you could elaborate a bit more. --- Reply to Comment 1.1.1: Title: Response to further questions Comment: Thank you for the thoughtful response. We respond to the further questions as follows. > Is this equivalent to consider a model with $N$ queries, after a left inner product with the canonical basis vector $e_1$? We may have misunderstood your question in our original rebuttal (and we apologize for any confusion). Multiplying $e_1$ to the softmax matrix indeed leads to the same output as our formulation. > The family of target functions considered in the discussion doesn't contain terms that consider the interaction between token $i$ and token $j$, with $i, j \neq 0$. There are two ways to incorporate the interaction between tokens $i$ and $j$ in the target function. 1. Currently, we are considering the (simplified) sequence-to-scalar attention models $y_0=f(x_0, (x_i)_{i=1}^n)$ where all tokens only interact with $x_0$.
When we map our results back to full sequence-to-sequence attention models $(y_i)_{i=1}^n = f((x_i)_{i=1}^n)$ (i.e., when we don’t multiply $e_1$ to the softmax matrix), the output tokens $y_i$ and $y_j$ will contain interactions between tokens $x_i$ and $x_j$. However, analyzing this full sequence-to-sequence model would be similar to our current sequence-to-scalar model (cf. the discussions in Lines 112-116 of the main paper). 2. Going beyond our current setting, suppose we restrict to sequence-to-scalar target functions $y_0=f(x_0, (x_i)_{i=1}^n)$ but still want $f$ to involve interactions between all $x_i$ and $x_j$. One way to approximate such functions is to consider multi-layer attention networks instead, with full sequence-to-sequence self-attention as the intermediate layers, plus a final sequence-to-scalar layer. We believe this would be an interesting direction but beyond the scope of the current work. > Can you argue why this could be a reason why attention layers perform better than other architectures (e.g. RFMLP) in natural language tasks? To provide a toy example, consider a simple task where the input sequence is “aabbccada”, and our target function is to count the number of “a”s in the sequence (on this example, 4). By embedding letters as orthogonal vectors, the target function can be represented as a one-layer attention network $f_\star = \sum_{j=1}^N {\rm ReLU}(\langle x_0 W, x_j \rangle) \langle x_j,v\rangle$, which counts the number of tokens that are the same as token $x_0$. The number of parameters of this transformer is independent of the input sequence length $N$. However, using MLPs, the sample complexity would naturally depend linearly on the sequence length $N$. This illustrates the benefit of the attention layer.
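This toy counting task can be checked numerically. The sketch below is our own illustration (not code from the paper): it uses one-hot letter embeddings (hence orthonormal vectors), takes $W$ to be the identity, and sets $v$ to the embedding of the query letter; with those choices the head $\mathrm{ReLU}(\langle x_0 W, x_j\rangle)\langle x_j, v\rangle$ fires exactly on tokens matching the query.

```python
import numpy as np

# Hypothetical embedding: each letter maps to a one-hot (orthonormal) vector.
letters = "abcd"
emb = {ch: np.eye(len(letters))[i] for i, ch in enumerate(letters)}

def count_attention(seq, query_letter="a"):
    """One-layer ReLU attention head f = sum_j ReLU(<x_0 W, x_j>) <x_j, v>,
    with W = I and v = emb[query_letter] (our illustrative choices)."""
    x0 = emb[query_letter]       # query token
    W = np.eye(len(letters))     # query-key matrix: identity for this toy task
    v = emb[query_letter]        # value vector
    return sum(max(x0 @ W @ xj, 0.0) * (xj @ v) for xj in (emb[ch] for ch in seq))

print(count_attention("aabbccada"))  # → 4.0
```

Note that the parameter count of this head depends only on the embedding dimension, not on the sequence length, which is the point of the example.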
Summary: The paper considers the representational and generalization properties of single-layer scalar-valued transformer models with random key and query matrices and value vectors that can depend on those random matrices. They draw a comparison to the well-studied random-feature models for two-layer neural networks. Concretely: * Theorem 1 proves that functions of the form $f_*(x_{0:N}) = \frac1N \sum_i F(x_0, x_i)$ can be efficiently represented under the random-feature model, with quadratic dependence on the input dimension $d$ and no dependence on the sequence length $N$. * Theorem 2 extends this approximation-theoretic result to generalization by proving a generalization bound on the empirical risk-minimizing random-feature transformer (among transformers with bounded value vectors) that fits a noiseless dataset. The proof follows from bounds on the Rademacher complexity of the family of functions that approximately represent the dataset. * The paper gives several examples in Section 3.3 of functions of the above form and shows that they admit much stronger learning rates for random-feature attention models than for standard random-feature models. Generally, standard random-feature models have a substantial dependence on $N$, while random-feature attention has no such dependence. * Theorem 3 proves a generalization bound similar to Theorem 2, but in the regime where the random feature matrices (e.g. the product of the key and query matrices) are biased in favor of larger elements on the diagonal. (This is empirically motivated in the appendix by a finding that BERT weight matrices frequently concentrate mass on the diagonals.) They prove generalization bounds for a restricted family of target functions, where high-degree polynomials of $\langle x_0, x_i\rangle$ and low-degree polynomials of $x_0 \otimes x_i$ may be averaged together. They provide several examples showing how this model can reduce the error rates over the standard random-feature attention model.
* Numerical experiments in Section 5 validate their theoretical results by comparing the error rates as a function of sample complexity of random feature MLPs, random feature attention, and biased random feature attention. Strengths: The work is novel, interesting, and relevant to the growing study of the theoretical properties of transformers. The work formalizes some intuitive advantages that attention models hold over standard MLPs in their ability to compute and aggregate pairwise functions of sequential inputs. Theoretical results are presented cleanly and the proofs that I read appeared correct. The work is creative, and draws inspiration for Theorem 3 from empirical observations. There is interesting follow-up work to be done on understanding the strengths and limitations of this model, especially on the optimization front. Weaknesses: While the bounds are interesting in their own right, the comparisons between random feature attention and MLP models focus on a few particular examples, whose generality is unclear. Moreover, the comparisons are between upper bounds for both models; ideally, the results would contrast with _lower_ bounds for random feature MLPs. ### Minor pointers l244: "scaler" -> "scalar" l657: $W$ in equation block should be $W_m$ Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Would you mind explaining if you expect that the $\delta^{-1}$ dependence in Theorem 1 could be improved? As far as I am aware, other random feature approximation papers can often achieve a $\sqrt{\log \delta^{-1}}$ dependence by employing concentration bounds over Hilbert spaces instead of Markov or Chebyshev, as is employed in the proof of Lemma B.1. I would be interested to know why in particular you chose to focus on the diagonally-biased weight initializations, when the plots in Figure 5 appear to make similarly strong cases in favor of selecting random feature matrices $W$ from distributions that impose either sparsity or low rank. 
(I am okay with these regimes not being included in the paper; I am mostly just interested in the choice.) Would it be possible to offer a more concrete comparison between the Rademacher complexity generalization bounds in this work and the single-layer covering-numbers-based bound of [Edelman et al](https://arxiv.org/abs/2110.10090)? While the regimes are different, I think the paper would benefit from a brief comparison of the advantages and disadvantages of each. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The work makes modeling assumptions with substantial differences from standard transformer architectures, notably the random weights, the use of a single layer, the lack of softmax, and scalar rather than sequential outputs. The work is upfront about those limitations. While the random-features regime for 2-layer neural networks is interesting for studying the representational and generalization properties of certain training regimes, it's well-known that the method (as well as all other kernel-based approaches) falls short when learning even simple problems like single-index models. (Or, as was argued in Example 1 of this paper, how standard random feature models fall short on sequential tasks whose outputs depend exclusively on $x_0$.) Are there similar illustrative examples that random feature attention units fail to efficiently approximate? Moreover, when providing examples that separate random feature MLPs and attention models, it may be helpful to provide a particular example of a target function where the random feature MLP model offers a better error bound, or to make an argument that RFA models will always have superior rates under certain sequential assumptions.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and the suggestions on our paper. We would appreciate it if you could champion our paper in the discussions! >While the bounds are interesting in their own right, the comparisons between random feature attention and MLP models focus on a few particular examples, whose generality is unclear. Moreover, the comparisons are between upper bounds for both models; ideally, the results would contrast with lower bounds for random feature MLPs. Our comparisons are indeed between upper bounds. Existing work has also derived lower bounds on the sample complexity of RFMLPs (e.g., learning a degree-$p$ polynomial by RFMLP requires $\Omega((dN)^p)$ samples; Ghorbani et al., 2021), which matches the upper bound for RFMLP that we use, though these lower bounds apply to a special case with a uniform distributional assumption (input vectors uniformly distributed on the sphere). We will make sure to emphasize this point and add a discussion of the lower bound in our revision. >Would you mind explaining if you expect that the $\delta^{-1}$ dependence in Theorem 1 could be improved? As far as I am aware, other random feature approximation papers can often achieve a $\sqrt{\log \delta^{-1}}$ dependence by employing concentration bounds over Hilbert spaces instead of Markov or Chebyshev, as is employed in the proof of Lemma B.1. The $\delta^{-1}$ dependence comes from the Chebyshev bound in Lemma B.1, which in turn comes from the precondition $\mathbb{E}[\|v(W)\|_2^2]\le R^2$. Since we only assume a bounded second moment, the Chebyshev bound is likely the best achievable. Further, a bounded second moment is likely the best we can do for expressing polynomials, due to RKHS-norm-related arguments (cf. the proof of Lemma B.2).
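For intuition, a schematic version of this Chebyshev argument (with generic symbols, not the exact quantities of Lemma B.1) is as follows. For i.i.d. vectors $Z_1,\dots,Z_M$ with $\mathbb{E}[\|Z\|^2]\le R^2$,

$$\Pr\left( \left\| \frac{1}{M}\sum_{m=1}^{M} Z_m - \mathbb{E}[Z] \right\| \ge t \right) \;\le\; \frac{\mathbb{E}\big[\|Z-\mathbb{E}[Z]\|^2\big]}{M\,t^2} \;\le\; \frac{R^2}{M\,t^2},$$

so setting the right-hand side to $\delta$ yields an approximation error of order $R/\sqrt{M\delta}$ with probability $1-\delta$, i.e. a polynomial dependence on $1/\delta$. Obtaining a $\sqrt{\log(1/\delta)}$ dependence would require sub-Gaussian-type tail control (e.g., an almost-sure norm bound on $Z$), which a second-moment assumption alone does not provide.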
>I would be interested to know why in particular you chose to focus on the diagonally-biased weight initializations, when the plots in Figure 5 appear to make similarly strong cases in favor of selecting random feature matrices $W$ from distributions that impose either sparsity or low rank. (I am okay with these regimes not being included in the paper; I am mostly just interested in the choice.) We acknowledge the reviewer's point that other patterns such as low rank or sparsity may be present in Figure 5. However, the diagonal pattern appears to be the most prominent, which motivated our setting here. >Would it be possible to offer a more concrete comparison with the Rademacher complexity generalization bounds in this work and single-layer covering numbers-based bound of Edelman et al? While the regimes are different, I think the paper would benefit from a brief comparison of the advantages and disadvantages of each. We will incorporate a discussion of the work cited by the reviewer. However, we note that the settings differ substantially. The previous work considers full attention where $Q, K,$ and $V$ are all trainable, whereas we analyze random feature attention with only a trainable $V$. Consequently, the results of Edelman et al. bound a larger quantity than our result and thus are not directly comparable. >Limitations We appreciate the very detailed suggestions on the limitations of our work, such as target functions that cannot be approximated by RFA or those requiring more samples than RFMLP. We will think carefully about these limitations and make sure to add a discussion of them in our revision. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and promising to make updates to the paper accordingly. I continue to believe that the paper provides an interesting contrast with previous work on RFMLP approximation powers, and my score will remain the same.
--- Reply to Comment 1.1.1: Title: Response Comment: Thank you for your response and your support of our paper!
Summary: The paper examines the capabilities of a single-layer multi-head attention layer in a scenario where the Key and Query matrices are predetermined and randomly selected from a Gaussian distribution. The only modifiable component is the Value matrices, and when provided with a convex loss, the minimization problem becomes convex. The authors establish expressivity results showing that, whether or not there is bias in the Key and Query matrices, the model can effectively learn a class of functions that exhibit permutation invariance in the Key vectors. Furthermore, they demonstrate that the sample complexity of their model is superior to that of two-layer random feature networks for the specific function class. The attention model investigated uses ReLU rather than the more common softmax attention. Strengths: - Results are new and, although not surprising, require certain effort to prove and could be a useful addition to the literature on random-feature-type models (more to that literature, I think, rather than to the attention/transformer literature) - Paper is well written (but please correct the many typos) and the authors provide comprehensive explanations of limitations Weaknesses: - A major weakness is that instead of the common softmax function, the authors opt to use the ReLU function. The use of ReLU is nonstandard in transformers and should be mentioned explicitly in the abstract and contributions. ReLU and softmax actually have rather different properties, and this should be clarified - The fixed Key and Query matrices restrict the learning setting to a linear problem. - The (as the authors admit) seemingly unnatural constraint set in (12) is rather "artificial". Why is such a constraint needed? Besides, knowledge of these $K_1, K_2$ requires bounds on $B(f_*)$, making it rather impractical - Thm 2 only applies to bounded loss. Is it possible to extend to, say, square loss? - Comparing RFA to RFMLP is based on comparing upper bounds to each other.
Is it known if the latter bounds are tight? - As the authors acknowledge, it is rather expected that RFA would beat RFMLP for the specific function class that involves correlations between key and query tokens. While the analysis is non-trivial, it is questionable what it is that we really learn from those bounds regarding attention. - Even though the results are new and the proofs require effort, the techniques are rather standard and it is not clear if they are revealing of any special properties of attention. If so, this would be interesting to emphasize. - There are a lot of typos - My overall concern with the paper is not about the motivation of the setting (use of ReLU and random features) and also it is a bit unclear what the take-home message is (other than a technically solid analysis in a mathematically interesting setting) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why not normalize the model with $1/M$ rather than including this scaling afterwards in the $v$ constraint set? Isn't that also more revealing of the "proof" strategy, where you compare to the expectation? - Eventually, can the authors comment on where they see this study leading? What specifically does it reveal about attention, and what are the authors' thoughts on the use of random features in this setting? - How would the results change if, instead of $W$ being Gaussian, $K, Q$ with lower inner dimension are Gaussian? - If I am not mistaken, I think I have seen in the literature some works experimentally validating the performance of random feature attention (although with softmax). This might be worth taking a look at, as it might help the narrative (sorry that I don't remember off the top of my head) - Can you comment on how to derive the last inequality in line 736 of Lemma B.3? - In Eq. (25), is it $\sum_{i=1}^N$ or $\sum_{j=1}^n$? - How do you compare the results in lines 263 and 264?
- Have you considered running your own experiments for softmax function instead of ReLU (to see the possible difference or implications for your results)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We respond to the comments as follows. ### Response to questions regarding the setting and message: >A major weakness is that instead of the common softmax function, the authors opt to use the ReLU function. A significant portion of our results (such as the generalization bound, and expressing functions within the infinite-head random feature space (cf. Eq. (26)) using finitely many heads) would not change if we changed the ReLU activation to the softmax. However, the ReLU activation enables a more precise understanding of the infinite-head random feature space, which we show includes natural target functions such as polynomials. This property of ReLU has also been used in many existing works on random feature models, such as Arora et al. 2019 and the many follow-ups after it. We agree that extending our analyses to random features with the softmax activation (especially studying the function space it can express) is an interesting question, which we would like to leave as future work. >The fixed Key and Query matrices restrict the learning setting to a linear problem. Our random feature model is linear in the parameters $(v_m)$ but **non-linear in the input $(x_i)_{i=0}^n$**. Such random feature models capture many non-trivial aspects of fully learnable neural networks, and have been a central topic of study in the deep learning theory literature (e.g. Daniely 2017, Mei & Montanari 2021, and the many references therein). > What is it that we really learn from those bounds regarding attention? > Even though the results are new and the proofs require effort, the techniques are rather standard and it is not clear if they are revealing of any special properties of attention. If so, this would be interesting to emphasize.
> My overall concern with the paper is not about the motivation of the setting (use of ReLU and random features) and also it is a bit unclear what the take-home message is (other than a technically solid analysis in a mathematically interesting setting) Our results convey several new messages, including * The sample complexity of random-feature attention does not depend on the sequence length, which contrasts with the random-feature MLP model (Section 3); * A non-zero-mean query-key matrix could further improve the sample complexity for learning certain functions of correlations (Section 4). Both results are not present in existing random features theory (for fully-connected neural networks). Please refer to our Additional Response to All Reviewers for more details. ### Response to technical questions: > The seemingly unnatural constraint set in (12) is rather "artificial". (12) has two constraints: the total norm and the total norm squared. The total-norm-squared constraint is essential and natural (equivalent to an $L_2$ regularization on $(v_m)$). By contrast, the total-norm condition is a technical condition merely for a slightly tightened sample complexity. > Thm 2 only applies to bounded loss. Is it possible to extend to, say, square loss? For squared loss with unbounded labels, we believe a similar result would still hold by using standard machinery such as truncation. We assumed the loss is Lipschitz and bounded at $0$ for simplicity only. > Comparing RFA to RFMLP is based on comparing upper bounds to each other. Our comparisons are indeed between upper bounds. Existing work has also derived lower bounds on the sample complexity of RFMLPs (e.g., learning a degree-$p$ polynomial by RFMLP requires $\Omega((dN)^p)$ samples; Ghorbani et al., 2021), which matches the upper bound for RFMLP that we use, though these lower bounds apply to a special case with a uniform distributional assumption (input vectors uniformly distributed on the sphere).
We will make sure to emphasize this point and add a discussion of the lower bound in our revision. > Why not normalize the model with $1/M$? We did not include any normalization for simplicity of presentation only, and we agree that normalizing by $1/M$ can make the model more intuitive. We will carefully think about whether to add this normalization in our revision. > How would the results change if, instead of $W$ being Gaussian, $K, Q$ with lower inner dimension are Gaussian? If $Q,K$ are Gaussian and have large inner dimensions, $W=Q^\top K$ would behave similarly to a Gaussian matrix, and thus we don't expect our results to change much. However, if $Q,K$ have small inner dimensions, then $W$ would be low-rank and thus behave very differently. > How to derive the last inequality in line 736 of Lemma B.3? For the equation in line 736 of Lemma B.3: we first drop the $\log(dM)/n$ term since $n\geq \log(dM)$. Then we use the bound on the expectation of the maximum of sub-Gaussian random variables. > How do you compare the results in lines 263 and 264? The arguments can be found in the proof of Proposition C.2 (Lines 919-930). --- Rebuttal Comment 1.1: Title: Use of ReLU Comment: Thank you for your response. I continue to think that this is a good contribution to the literature on RF models, but one that bears limitations when phrased in the context of the transformer self-attention mechanism: (1) use of ReLU rather than softmax, (2) the required number of heads being at least proportional to $n$ (e.g. Examples 1, 2, 4, 5), (3) treating $Q, K$ as a single parameter matrix $W$, ignoring the potentially low-rank structure. I understand that the use of ReLU makes the analysis more tractable. As the authors mention, this allows them to leverage existing techniques developed for MLPs. On the other hand, the standard self-attention module relies on softmax, and it is not clear whether results for ReLU are also applicable (and to what extent) to softmax.
I believe it would be particularly interesting to extend the RF analysis to softmax, and this probably requires new tools. In order to leave clear room for such extensions, I strongly believe that the use of ReLU in the paper should already be mentioned in the abstract and introduction (if not in the title). --- Reply to Comment 1.1.1: Title: Response on limitations Comment: Thank you for the response and the constructive suggestions. We agree that the three limitations you pointed out are valid concerns that should be addressed, and we will aim to improve upon these aspects in future work. Here we would like to raise a few points about these limitations that may mitigate their impact in the current work to some extent. > In order to leave clear room for such extensions, I strongly believe that the use of ReLU in the paper should already be mentioned in the abstract and introduction (if not in the title). We agree that our use of a ReLU-based self-attention module differs from the standard softmax-based approach. As suggested, we will emphasize this in our abstract to leave room for extensions to softmax attention. However, we believe the ReLU attention is a pragmatically useful starting point. For example, on the practical side, some recent work has found that larger-scale ReLU-based transformers perform similarly to standard softmax-based ones [73]. [73] K. Shen, J. Guo, X. Tan, S. Tang, R. Wang, and J. Bian. A study on ReLU and softmax in transformer. https://arxiv.org/abs/2302.06461 > the required number of heads at least proportional to $n$ (e.g. Examples 1, 2, 4, 5) Our general result (Theorem 2) does not require a hard lower bound like $M\ge O(n)$. Rather, in our examples, we chose $M\ge O(n)$ simply to make the approximation error term (induced by finite $M$) in Theorem 2 smaller than the generalization term, so as to simplify the rate.
Such a treatment (choosing a high enough number of neurons to make the approximation error negligible) is also standard in the analysis of over-parametrized random feature / neural tangent kernel models; see, e.g., Arora et al. (2019, Theorem 5.1).
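As an illustration of this point, the following sketch (ours, not from the paper) shows the Monte Carlo phenomenon behind it: a finite-$M$ random-feature average approximates its infinite-head expectation with error decaying roughly like $1/\sqrt{M}$, so a large enough $M$ makes the approximation term negligible. As a stand-in for a random-feature expectation, we use the closed form $\mathbb{E}_{w\sim\mathcal{N}(0,I_d)}[\mathrm{ReLU}(\langle w, x\rangle)] = \|x\|/\sqrt{2\pi}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x = np.zeros(d)
x[0] = 1.0  # unit-norm input

# Infinite-head limit: E_{w ~ N(0, I)}[ReLU(<w, x>)] = 1/sqrt(2*pi) for ||x|| = 1.
target = 1.0 / np.sqrt(2 * np.pi)

for M in (10, 100, 10000):
    W = rng.normal(size=(M, d))             # M random-feature "heads"
    approx = np.maximum(W @ x, 0.0).mean()  # finite-head Monte Carlo average
    print(M, abs(approx - target))          # error shrinks roughly like 1/sqrt(M)
```

The same reasoning applies head-wise to the attention features; the specific expectation above is only a simplified proxy.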
Summary: This paper first studies the expressive power of a random-feature attention layer and then analyzes the generalization gap of the attention layer. A sample complexity bound is shown, which indicates that a larger number of attention heads helps generalization. The paper also compares the results between attention and MLP layers for several target functions. Numerical experiments support the theory. I am willing to update my score after the rebuttal if my concerns are addressed with revisions in the manuscript. ------------------------------------------------------------------- After rebuttal, I increased my score to 5. Please see comments below. Strengths: 1. The paper is clear and well-written. The proof seems to be solid. 2. The random feature analysis of attention layers is new to this community. The analysis combines the expressive power and the generalization gap, which is better than some existing works. 3. The comparison between attention layers and MLPs covers a lot of target functions, which is impressive. Weaknesses: 1. Equation 3 is important for further derivation. From my understanding, Equation 3 indicates that the attention map is fixed for all data. I think attention layers usually have different attention maps for different data. I believe this is a big limitation of this work. Some clarification about "fixed attention" is needed to avoid misunderstanding. 2. An attention map fixed at random initialization makes the attention layer useless and meaningless. I think at least the attention layer should be trainable. 3. About the related works on generalization analysis of Transformers, the references discussed are too old. Here are some recent references. I would like to see a discussion of these works (some of these works are concurrent works). [1] Jelassi et al., 2022, "Vision Transformers provably learn spatial structure.
" [2] Li et al., 2023, "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity. " [3] Oymak et al., 2023, "On the Role of Attention in Prompt-tuning. " [4] Tarzanagh et al., 2023, "Max-Margin Token Selection in Attention Mechanism. " Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the comparison in Sections 3.3 and 4.2 a comparison between upper bounds (sufficient conditions) for RFA and RFMLP? I think so. If so, this should be mentioned. This is not very rigorous, but I am ok with it because it is difficult to compare an upper bound and a lower bound. 2. What does "permutation invariant of the target function" mean? Why is it needed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. > Equation 3 is important for further derivation. From my understanding, Equation 3 indicates that the attention map is fixed for all data. I think attention layers usually have different attention maps for different data. I believe this is a big limitation of this work. Some clarification about "fixed attention" is needed to avoid misunderstanding. Could the reviewer kindly clarify what "fixed for all data" means in the context of Equation 3? We want to make sure we fully understand your perspective so that we can make appropriate clarifications in the paper. As in standard practice, our attention matrices $(Q_m, K_m, V_m)$ are the same for all data points (cf. Equation 1). > An attention map fixed at random initialization makes the attention layer useless and meaningless. I think at least the attention layer should be trainable. Our attention layer is still learnable, since the value matrices $V_m$ are learnable. We only freeze the inner layer (the query-key matrices). This corresponds to the Random Feature setting, which is widely considered in deep learning theory. > About the related works on generalization analysis of Transformers, the references discussed are too old. Here are some recent references. I would like to see a discussion of these works (some of these works are concurrent works). [1] Jelassi et al., 2022, "Vision Transformers provably learn spatial structure." [2] Li et al., 2023, "A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity." [3] Oymak et al., 2023, "On the Role of Attention in Prompt-tuning." [4] Tarzanagh et al., 2023, "Max-Margin Token Selection in Attention Mechanism." We thank the reviewer for suggesting these works, and we will make sure to incorporate and discuss these studies in our revised version.
A key distinction between these works and ours is that they only focus on a small class of target functions with special properties, whereas our work covers a large generic class of target functions (polynomials of input tokens of any degree). >Is the comparison in Sections 3.3 and 4.2 a comparison between upper bounds (sufficient conditions) for RFA and RFMLP? I think so. If so, this should be mentioned. This is not very rigorous, but I am ok with it because it is difficult to compare an upper bound and a lower bound. Our comparisons are indeed between upper bounds. Existing work has also derived lower bounds on the sample complexity of RFMLPs (e.g., learning a degree-$p$ polynomial by RFMLP requires $\Omega((dN)^p)$ samples; Ghorbani et al., 2021), which matches the upper bound for RFMLP that we use, though these lower bounds apply to a special case with a uniform distributional assumption (input vectors uniformly distributed on the sphere). We will make sure to emphasize this point and add a discussion of the lower bound in our revision. >What does "permutation invariance of the target function" mean? Why is it needed? A function $f(X_1,\ldots,X_n)$ of $n$ tokens is permutation invariant if $f(X_{\sigma(1)},\ldots,X_{\sigma(n)}) = f(X_1,\ldots,X_n)$ for any permutation $\sigma:[n]\to[n]$. We consider permutation-invariant target functions, as attention layers can only fit these functions (due to the structure of attention heads). We will clarify the meaning of "permutation invariance" in our revision. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for the response, and I am sorry for the confusion. I did not mean that the attention is fixed "for all the data"; what I want to emphasize is that the attention map itself is fixed. Because $W_m$ is fixed during training, the attention map is fixed. So this question is similar to my second question. Your answer to my second question says $V_m$ is trainable.
However, I feel a trainable attention map is essential to learning Transformers. I know this setting can hardly be changed because of the random feature framework. I can consider increasing the score if the authors can answer these two alternative questions properly. The main point is that I want to figure out how the random feature analysis is useful for studying the generalization of the Transformer. 1. What does it imply for $x_0$ and $x_i$ if $\sigma(\langle W_m, x_0x_i^\top\rangle)$ is activated, and what does it imply when it is not activated? The $\sigma(\cdot)$ you use is ReLU, right? I want to know, although $W_m$ is randomly initialized and fixed, can it characterize some meaningful relationship between the query and key? 2. Can you provide an intuition for the comparison between Transformer and MLP, i.e., what is the reason that the Transformer is better than the MLP in terms of generalization? I think the logic should be that it comes from the self-attention layer, because it is the major difference between these two architectures. However, I cannot see it from the theory in this paper, because the self-attention layer seems not important in the analysis. I am overall satisfied with the response to the other questions. A larger range of target functions is a good contribution. --- Reply to Comment 1.1.1: Title: Response to the further questions Comment: We thank the reviewer for the prompt response and the clarification of your questions. To answer the two alternative questions about random feature attention: 1. As we chose $\sigma(\cdot)$ to be ReLU, $\sigma(\langle W_m, x_0x_i^\top\rangle)$ is activated iff $x_i^\top W_mx_0>0$, i.e. $x_i$ and $x_0$ have a positive correlation when transformed by $W_m$. A higher correlation yields a higher attention score. * In a simpler scenario where $W_m=U_m^\top V_m$ with $U_m,V_m$ being orthogonal matrices, this is equivalent to $(U_mx_i)^\top V_mx_0 > 0$, i.e.
$x_i$ and $x_0$ have a positive correlation when rotated by $U_m$ and $V_m$ respectively. * In the general case where $W_m$ is a random matrix with SVD $W_m=U_m^\top D_m V_m$, the condition requires the (diagonally scaled) vectors $D_m^{1/2}U_mx_i$ and $D_m^{1/2}V_mx_0$ to have a positive correlation. Thus, different randomly initialized attention heads $(W_m)_{m\in[M]}$ induce "correlation tests" with different rotations and scalings, which could characterize meaningful similarity relationships between the query $x_0$ and key $x_i$. 2. Transformers (in our case single-layer attention models) generalize better than MLPs as attention models naturally admit a permutation-invariant structure between tokens, i.e. ${\rm Attn}(x_0; x_1, \dots, x_N) = {\rm Attn}(x_0; x_{\sigma(1)},\dots,x_{\sigma(N)})$ where $\sigma:[N]\to[N]$ is any permutation. As a result, the sample complexity of learning with attention models does not scale with the sequence length when learning such permutation-invariant target functions. Concretely, for fitting target functions of the form Eq. (7) (which are permutation invariant), the sample complexity of the attention model depends only polynomially on the individual token dimension $d$ but not on the sequence length $N$, whereas the sample complexity of MLPs depends polynomially on $dN$. (See Examples 1-3 for the concrete comparisons.) We agree both questions are important for understanding the role of our random-feature attention model in the study of transformers. We will expand on these points in our revision.
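The two points above (the SVD "correlation test" view of the ReLU activation condition, and the permutation invariance of attention pooling) can be sanity-checked numerically. The following NumPy sketch uses toy dimensions and a simplified mean-pooled attention head, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, M = 8, 5, 16  # token dim, sequence length, number of heads (illustrative sizes)

x0 = rng.normal(size=d)          # query token
X = rng.normal(size=(N, d))      # key tokens x_1..x_N
W = rng.normal(size=(M, d, d))   # fixed random attention weights W_m

def rf_attn(x0, X, W):
    """Toy random-feature attention: ReLU scores x_i^T W_m x_0, mean-pooled over tokens."""
    scores = np.maximum(X @ W @ x0, 0.0)  # shape (M, N)
    return scores.mean(axis=1)            # permutation-invariant pooling, shape (M,)

# 1) SVD view of the activation condition: writing W_m = U_m^T D_m V_m,
#    x_i^T W_m x_0 equals the inner product <D^{1/2} U_m x_i, D^{1/2} V_m x_0>,
#    i.e. a "correlation test" between rotated-and-scaled query and key.
m, i = 0, 0
U, s, Vt = np.linalg.svd(W[m])            # numpy convention: W[m] = U @ diag(s) @ Vt
lhs = X[i] @ W[m] @ x0
rhs = (np.sqrt(s) * (U.T @ X[i])) @ (np.sqrt(s) * (Vt @ x0))
assert np.isclose(lhs, rhs)

# 2) Permutation invariance: shuffling the key tokens leaves the output unchanged.
perm = rng.permutation(N)
assert np.allclose(rf_attn(x0, X, W), rf_attn(x0, X[perm], W))
```

Here `rf_attn` is a hypothetical stand-in for the random-feature attention head; the checks only exercise the two algebraic facts stated in the reply.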
Rebuttal 1: Rebuttal: **Additional Response to All Reviewers** We thank all reviewers again for their valuable feedback on our work. Here we would like to highlight our contributions again, which we believe were missed by some reviewers: - The random features (RF) model is an important and widely studied model in deep learning theory (e.g., Daniely 2017; Mei & Montanari 2021; and the many references therein), particularly for fully connected neural networks. This paper **initiates the study of random feature attention models and provides a set of end-to-end learning results**, which we believe should be of broad interest to the community and could inspire follow-up works. - Our Section 4 considers the case where the $W$ matrices in random-feature attention are initialized as identity plus standard Gaussian, going beyond the standard Gaussian initialization. This setting was motivated by the empirical weight patterns of *pretrained* BERT models. We show that such initialization has non-trivial advantages (in sample complexity upper bounds) over standard Gaussian initialization, for learning certain functions depending on the correlation of tokens. This is a new message that **did not have an analog in existing random feature theory** to the best of our knowledge. - Our paper introduces several new techniques, such as sharp analyses of the Rademacher complexity for random-feature attention models (Lines 188-193), as well as approximating functions of correlations using identity-biased random-feature attention (Lines 283-288 & concrete statement in Lemma C.2). We believe these techniques **could be useful for future work on analyzing attention**. We would be grateful if the reviewers could reconsider their evaluation, taking into account our contributions. We will highlight these points more clearly in our revision. Additionally, we appreciate the reviewers for catching the typos, which we will fix in the revision.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Transitivity Recovering Decompositions: Interpretable and Robust Fine-Grained Relationships
Accept (poster)
Summary: This paper aims at fine-grained representation learning. The authors state that local-to-global relationships leveraged in recent fine-grained visual categorization (FGVC) works are abstract. To make such abstract relational representations more human-understandable, the authors first theoretically show the existence of semantically equivalent graphs for abstract relationships and derive their key information theoretic and topological properties. Then, the authors present Transitivity Recovering Decompositions (TRD), which is a graph-space search algorithm that identifies interpretable equivalents of abstract emergent relationships at both instance and class levels and with no post-hoc computations. The authors run experiments to demonstrate the effectiveness of their methods. Strengths: + This paper is well-written and easy to follow. + The motivation is strong, and the technique in this paper is solid. + The proposed method reaches SOTA performance on standard small, medium, and large scale FGVC benchmarks. The authors conduct ablation studies and present visualization results to show the effectiveness of their method. Weaknesses: - The authors state that the local-to-global (emergent) relationships are leveraged in existing methods in an abstract fashion. However, no detailed explanation of the term "abstract" is provided in this paper. - The authors propose a graph-based model for fine-grained visual categorization (FGVC) recognition. However, the authors do not provide a comprehensive review of the relevant literature in the field, such as [1-3], and do not compare their approach with other relevant methods in their experiments. - The authors' comparison seems unfair. In the field of FGVC, a common practice is to resize images to 448$\times$448, but the authors do not follow this approach. Furthermore, there is a lack of experimental details, which raises concerns about the validity of the experiments. 
I am unsure whether the pre-trained models used in the study are sourced from ImageNet-1K or ImageNet-21K. Additionally, I find the results of TransFG and FFVT on the Aircraft dataset in Table 1 confusing. While I understand that ViT-based models may have limitations on the current dataset, the reported results seem unexpected. [1] Where to Focus: Investigating Hierarchical Attention Relationship for Fine-Grained Visual Classification, ECCV22\ [2] Weakly Supervised Posture Mining for Fine-Grained Classification, CVPR23\ [3] SR-GNN: Spatial Relation Aware Graph Neural Network for Fine-Grained Image Categorization, TIP Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mentioned the limitations and societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Definition of abstract** Although we briefly refer to what we mean by "abstract" in Lines 23-24, we agree that it requires further elaboration and detailing. We provide an in-depth definition below, which we will also add to the final version. Existing works that leverage relational information for representation learning typically adopt the following approaches: - Modelling all possible ways the local views can combine to form the global view through a transformer, and distilling out the information about the most optimal combination in the summary embedding [8, 9]. - Modelling the underlying relations through a GNN, but performing an aggregation on its outputs [91, 25]. \ Both approaches produce vector-valued outputs and, as such, cannot be decoded in a straightforward way to get an understanding of what the underlying emergent relationships between views are. This lack of transparency is what we refer to as "abstract". The above methods exhibit this abstraction not only at the instance level but also at the class level, where representations likewise appear as vector-valued embeddings. On the contrary, an interpretable relationship encoder should be able to produce graphs encoding relationships in the input, intermediate and output spaces, while also representing a class as relationships among concepts, instead of single vectors that summarize information about emergence. **2. Related works** We thank the reviewer for pointing us to the related works [a, b, c], and we apologize for missing these in our literature survey. The table below shows a performance comparison on the common benchmarks between ours and [a, b, c]. Due to the short rebuttal timeframe, we were unable to evaluate all of these methods on the remainder of our datasets. However, on all of the common datasets, our TRD can still be seen to surpass all of [a, b, c], thanks to its inherent robustness. 
**Note that we designed TRD not with the objective of classification accuracy improvement, but rather for providing interpretability to existing relational representation based algorithms.** \ [a] is a discriminative part discovery based approach and does not involve computing any cross-view relationships. Although both [b] and [c] provide graph-based intermediate image representations, they aggregate the full graph into a single vector-valued output, keeping the final image and class representations abstract.\ We will add the above discussions on [a, b, c] in section 4.3 as well as the below comparison to Table 1 in the final version. | | CUB | FGVC Aircraft | Stanford Cars | |------------------|:---------:|:-------------:|:-------------:| | WhereToFocus [a] | 90.80 | 94.70 | 95.30 | | PMRC [b] | 91.80 | 94.80 | 95.40 | | SR-GNN [c] | 91.90 | 95.40 | 96.10 | | **TRD (Ours)** | **92.10** | **95.60** | **96.35** | **3. Input size, pretrained weights, and performance of transformers** - Input size - It is true that certain works on FGVC that consider the complete image as the only input to the model do resize the image to 448x448. However, we follow recent SoTA FGVC approaches that use relation-agnostic encoders to extract global and local views [9, 78, 88]. In particular, our view extraction process is exactly the same as Relational Proxies (NeurIPS 2022) [9], with the same input image resolution and backbone (relation-agnostic) encoder. The above approaches first extract the global view from the input image, and crop out local views from that. Since there are two scales at which the image is cropped, all the crops are resized to 224x224. In practice, this provides a similar resolution to resizing the full image to 448x448. Following the reviewer’s proposition, we have now re-trained and re-evaluated our model with the input images resized to 448x448, keeping the remainder of the process of view extraction the same. 
Below we provide our results on multiple datasets: | | Soy | FGVC Aircraft | Stanford Cars | |--------------|:-----:|:-------------:|:-------------:| | TRD: 224x224 | 52.15 | 95.60 | 96.35 | | TRD: 448x448 | 52.23 | 95.62 | 96.39 | With the resized input of 448x448, we observe minor improvements in the performance of TRD. - Pretrained weights - We thank the reviewer for pointing this out. The ResNet50 that we use as our relation-agnostic encoder was pre-trained on ImageNet1K, following existing literature [9, 78, 88]. We provide the details of our experimental settings in Section 4.1 Experimental Settings and Datasets. If the reviewer feels that we have missed something more, we would be happy to add those details as well. - Performance of TransFG and FFVT - Unfortunately, neither TransFG nor FFVT reports results on FGVC Aircraft in their original papers. We found the only evaluation of these methods on FGVC Aircraft to be present in [9]. For the sake of assurance, we reevaluated both TransFG and FFVT on FGVC Aircraft and were able to replicate the numbers reported in [9], which is what we also report in the paper. We additionally note that our method also surpasses both TransFG and FFVT by significant margins on all other datasets as well (not just FGVC Aircraft), especially the small-scale ones. We conjecture that both FFVT and TransFG, being transformer-based models, are not able to cope very well with the small dataset sizes and low sample diversity of FGVC Aircraft (as well as the small-scale datasets), leading to relatively lower accuracies. 
[a] Where to Focus: Investigating Hierarchical Attention Relationship for Fine-Grained Visual Classification, ECCV22 \ [b] Weakly Supervised Posture Mining for Fine-Grained Classification, CVPR23 \ [c] SR-GNN: Spatial Relation Aware Graph Neural Network for Fine-Grained Image Categorization, TIP --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks for the detailed rebuttal, my concerns are properly addressed.
Summary: The authors propose TRD, an algorithm that decomposes both input images and output classes into graphs over views by recovering transitive cross-view relationships for fine-grained visual categorization. Strengths: 1. The paper is well written and easy to follow. 2. The proposed TRD is demonstrated both theoretically and empirically. Weaknesses: 1. It seems that most of the experimental results in Table 1 are copied from the Relational Proxies paper. But the experimental setup seems different. For example, the number of local views is different. 2. As we know, deep GNNs usually suffer from over-smoothing issue, i.e., as the number of layers increases, the learned representations become nearly indistinguishable and the performance degrades significantly. The authors use an 8-layer GAT with 4 attention heads in each hidden layer. The reviewer wonders how the number of layers affects the performance of your models. Does the over-smoothing phenomenon exist? 3. The proposed method seems strongly related to Relational Proxies. Can you explain more about the relationship and difference between these two methods? Moreover, compared with Relational Proxies, the performance gains of TRD are very marginal, as can be seen from Table 1 and Figure 5. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the efficiency/time complexity of the proposed method? The proposed TRD needs to construct multiple graphs. Considering the marginal performance gains, is TRD more efficient than Relational Proxies? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Experimental settings** We agree that the number of local views is the only hyperparameter in which we differ from Relational Proxies, but the remaining settings are the same (Line 264). We note that our experiment on robustness (Figure 5) tests both TRD and Relational Proxies with the same number of views, and TRD can be seen to surpass Relational Proxies in the majority of the cases. Following the reviewer’s suggestion, we evaluate both Relational Proxies and TRD with the same number of local views in the normal (no explicit addition of noise) setting on the FGVC Aircraft dataset, and report our findings in the table below. TRD marginally outperforms Relational Proxies for all values of the number of local views, and exhibits a trend of scaling in accuracy with increasing number of local views. Relational Proxies, on the other hand, does not seem to benefit from increasing the number of local views, possibly due to its lack of robustness to noisy views. | # Local Views | 8 | 16 | 32 | 64 | |--------------------|:-----------:|:-----------:|:-----------:|:-----------:| | Relational Proxies | 95.25 | 95.30 | 95.29 | 95.31 | | **TRD (Ours)** | **95.27** | **95.45** | **95.52** | **95.60** | **W2. Over-smoothing** We thank the reviewer for suggesting this experiment, as it provides valuable insights into the ability of the GAT in TRD to learn the emergent relationships. We had initially evaluated using GATs of up to 16 layers, and had found the 8 layer version to be the best. Following the reviewer’s suggestion, we have evaluated TRD using GATs of up to 64 layers on FGVC Aircraft, presenting our findings below. We see that the performance does drop beyond 8. To validate whether this is due to the oversmoothing phenomenon, we measure the degree of distinguishability among the nodes by taking the average of their pairwise $L_2$ distances. 
The table shows that the distinguishability also decreases as we increase the number of layers, suggesting that the over-smoothing phenomenon does occur, as the reviewer had correctly speculated. Incidentally, even under the light of the above experiments, the 8-layer GAT remains an optimal choice for our problem. | GAT-Depth | 4 | 8 | 16 | 32 | 64 | |--------------------|:-----:|:-----:|:-----:|:-----:|:-----:| | Accuracy | 95.05 | **95.60** | 95.32 | 94.78 | 94.20 | | Distinguishability | 0.87 | 0.63 | 0.49 | 0.21 | 0.09 | **W3. Relational Proxies** The similarity between TRD (proposed) and Relational Proxies is in that they both leverage local-to-global emergent relationships to achieve the sufficiency criterion (Appendix A.3). The main difference lies in the central objectives of the two works. Relational Proxies aims to achieve SoTA performance in FGVC by learning abstract relational representations of local-to-global emergence. On the other hand, we aim to make the process of relational representation learning transparent and interpretable, by performing all computations in terms of graphs representing the learned relationships, while maintaining their performance.\ The trade-off between performance and interpretability is a well-known phenomenon in the literature [29, 67, 21, 35]. However, TRD is not only able to retain the performance of the existing SoTA Relational Proxies, but provide marginal gains as well. This can be attributed to the (provable) robustness of TRD to noisy views. Whatever performance degradation comes from the decomposition of the abstract latent representations into graphs over image views, is recovered by the intrinsic robustness of the transitivity recovery objective. 
This can be seen in action in Rows 1 - 3 in Table 2 of our Ablation Studies.\ To summarize, **our primary objective is not to surpass SoTA in FGVC, but to provide interpretability to existing SoTA algorithms that leverage relational information to achieve maximal expressivity, while retaining their performance**. The degradation in performance that comes from interpretability is compensated for by the inherent robustness of TRD, allowing us to achieve marginal performance gains over Relational Proxies, even though classification SoTA advancement is not our main objective.\ Apart from our theoretical analyses (Sections 3.1 and 3.2) establishing the equivalence of abstract relational representation learning algorithms and our proposed TRD, we perform empirical evaluations in the Supplementary Section 3 to extract the instance-level relationships learned by Relational Proxies via a post-hoc explainer. Via TRD, we can obtain such relationships in an ante-hoc manner directly as part of the inference pipeline, without the need for any post-hoc explainers, and not only at the level of the instance, but also for the class. **Q. Compute Cost** Below we provide the computational costs of Relational Proxies and TRD in terms of wall clock time evaluated on FGVC Aircraft (same experimental settings including # local views): | | Average Inference Time (ms) | Training Time (hrs) | |--------------------|:---------------------------:|:-------------------:| | Relational Proxies | 130 | 22 | | **TRD (Ours)** | **110** | **15** | We can see that TRD is significantly more efficient than Relational Proxies in terms of both single sample inference as well as training time until convergence. This is because of the following reasons: - The Complementarity Graph in TRD is constructed exactly once before training, and the semantic relevance graph, as well as the proxy graph are learned as part of the training process. 
- TRD does not involve updating the relation-agnostic encoder $f$, which is a ResNet50, as part of the training process. Relational Proxies requires it to be updated, thereby exhibiting computationally heavier forward (as local view embeddings cannot be pre-computed) and backward passes. --- Rebuttal Comment 1.1: Title: Could you please read the rebuttal and share your thoughts at your earliest convenience? Comment: Cheers, AC
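The distinguishability measure used in the over-smoothing experiment above (W2), i.e. the average of pairwise $L_2$ distances between node embeddings, is straightforward to compute. A minimal NumPy sketch with synthetic embeddings (the embeddings and sizes are illustrative, not the paper's actual GAT outputs):

```python
import numpy as np

def distinguishability(H):
    """Average pairwise L2 distance between node embeddings H of shape (n_nodes, dim)."""
    diffs = H[:, None, :] - H[None, :, :]          # (n, n, dim) pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)         # (n, n) distance matrix
    n = H.shape[0]
    return dists[np.triu_indices(n, k=1)].mean()   # mean over distinct node pairs

rng = np.random.default_rng(0)
H_diverse = rng.normal(size=(16, 32))                         # well-separated nodes
H_smooth = np.ones((16, 32)) + 1e-3 * rng.normal(size=(16, 32))  # over-smoothed nodes

# Over-smoothed (nearly identical) embeddings score close to zero, mirroring
# the decreasing distinguishability reported for deeper GATs in the table above.
assert distinguishability(H_smooth) < distinguishability(H_diverse)
```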
Summary: The paper presents a novel perspective on interpretable representation learning, introducing Transitivity Recovering Decompositions (TRD) as a method for identifying graphs that can learn local-to-global representations. The proposed approach achieves state-of-the-art (SOTA) performance on Fine-Grained Visual Classification (FGVC) datasets while maintaining interpretability. The TRD is well-defined, supported by theoretical and empirical analysis, and conducts thorough interpretability and robustness experiments. Strengths: 1. The authors provide a well-defined and theoretically supported Transitivity Recovering Decompositions (TRD) method, which is further validated through empirical analysis. 2. The experiments conducted on FGVC benchmarks demonstrate consistent SOTA performance across multiple datasets, although the improvements are marginal. 3. The paper includes comprehensive interpretability and robustness experiments, effectively showcasing the effectiveness of TRD to a certain extent. Weaknesses: 1. The introduction section requires improvement in terms of providing a high-level overview of the proposed TRD. A simple end-to-end pipeline overview in the introduction would greatly benefit readers' understanding. 2. The inference pipeline of the proposed system is not clearly described. Including the training and testing pseudo code would substantially enhance the clarity of the paper. 3. The robustness analysis is limited. To comprehensively evaluate the interpretability of TRD, it is crucial to observe the results under the influence of causal interventions. For instance, replacing a percentage of local views from another class and observing the results would provide valuable insights. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. What if there are multiple high-level concepts along with low-level concepts present in a training distribution? 
A quick analysis of datasets such as CIFAR100, or (if possible) ImageNet can put some light on this. Like, multiple breeds of dogs, cats, and birds are in a single classification task. 2. How can the presence of a different local view (belonging to another class) affect the forming of cliques? (Any qualitative example or quantitative number can help understand the potential impact of TRD) 3. What are the values of empirically defined hyper-parameters (such as delta and gamma) across the datasets? A plot/table showing the effect of such variables across the datasets is important to measure the stability of the TRD. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately discussed the limitations and potential positive societal impact of their work. No further discussion is necessary in this regard. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Overview** We thank the reviewer for pointing this out. We will add the following to the final version: "After decomposing an input image into its constituent views following related literature [9, 78, 88], we initialize the relational representation by forming a graph through connecting complementary sets of nodes. This allows information to flow across disjoint localities, while disregarding redundancy. We then propagate this initial image graph through a trainable GNN. We obtain the class proxy graph by an online clustering of the instance node and edge embeddings. We train the GNN to match the instance and the proxy graphs by recovering transitive relationships. Concretely, this is achieved by minimizing the edit cost between the two, approximated by a learnable form of the Hausdorff Edit Distance." **W2. Pseudocode** Below, we provide PyTorch-style training and inference pseudocode for TRD. We will release our full implementation upon acceptance.\ **Preprocess** ```
def get_graphloader(X, Y, mode='train'):
    labels_to_instances = {}
    graphs = []
    # Complementarity graphs
    for x, y in dataloader(X, Y):
        Z_l = local_views(x)
        z_g = global_view(x)
        Z_v = [f(z) for z in Z_l]
        G_c = {nodes: Z_v,
               edges: [(z_i, z_j, repeat(1 / dot(z_i, z_j), n))
                       for z_j in Z_l for z_i in Z_l]}
        # Adding global view and its edges
        G_c.nodes += z_g
        G_c.edges += [(z_g, z_l, ones(n)) for z_l in Z_l]
        graphs.append(G_c)
        if mode == 'train':
            labels_to_instances[y].append(G_c)
    if mode == 'test':
        return dataloader(graphs)
    # Initialize proxy graphs
    for label in Y:
        e = random.choice(labels_to_instances[label].all_edges())
        n = unique(e[0] + e[1])
        # Instance embeddings are only used as initializations.
        # Proxies must be independent entities.
        Y_proxies[label].nodes = deepcopy(n)
        Y_proxies[label].edges = deepcopy(e)
        labels_to_instances[label] = Y_proxies
    return dataloader(labels_to_instances)
``` **Training** ```
for G_c, P in get_graphloader(X, Y):
    G_s = phi(G_c)  # semantic relevance graph
    # Assignment of instances to proxies
    scores = pairwise_hausdorff(G_s, P)  # [59]
    preds = sinkhorn(scores)  # [7]
    probs = softmax(preds / temp)
    loss = proxy_anchor(probs, y)  # [36]
    update(phi.params)  # Update GNN
    update(P)  # Update proxy centroids
``` **Inference** ```
# X: Test images; P: Trained Class Proxies
for G_c in get_graphloader(X, None, mode='test'):
    G_s = phi(G_c)  # semantic relevance graph
    # Assignment of instances to proxies
    scores = pairwise_hausdorff(G_s, P)  # [59]
    pred = argmax(scores, dim=1)
``` **W3 and Q2. Causality** We thank the reviewer for suggesting these insightful experiments on robustness. To this end, we train TRD by replacing a subset of the local views for each instance with local views from other classes, both during training and inference. As the proxies are obtained via a clustering of the instance graphs, these local views consequently influence the proxy graphs. We report our quantitative and qualitative findings in the **pdf attached as part of the global response.**\ TRD significantly outperforms Relational Proxies [9] at all noise rates, and the gap between their performances widens as the percentage of corruption increases (Tab 1, attached pdf). Qualitatively (Fig 1, attached pdf), our model successfully disregards the views introduced from the negative class at both the instance and proxy level. Such views can be seen as being very weakly connected to the global view, as well as the correct set of local views that actually belong to that class. **Under this causal intervention, the TRD objective is thus equivalent to performing classification while having access to only the subgraph of clean views from the correct class.** **Q1. 
Coarse-grained** Following the reviewer’s suggestion, and existing FGVC literature [9, 19], we evaluate the contribution of our novel Transitivity Recovery objective in the coarse-grained (multiple fine-grained subcategories in a single class) and fine-grained subsets of ImageNet, namely Tiny ImageNet and Dogs ImageNet (Stanford Dogs) and report our findings below. Although our method can surpass existing SoTA in both the settings, larger gains ($\Delta$) are achieved in the fine-grained setting, suggesting that TRD is particularly well suited for that purpose. | | Tiny ImageNet | Dogs ImageNet | |----------------------------|:-------------:|:-------------:| | MaxEnt [19] | 82.29 | 75.66 | | Relational Proxies [9] | 88.91 | 92.75 | | $a =$ w/o Transitivity Recovery | 88.10 | 91.03 | | $b =$ with Transitivity Recovery | 89.02 | 93.10 | | $\Delta = (b - a)$ | 0.92 | **2.07** | **Q3. Bounds** Delta and gamma are not hyperparameters, but intrinsic properties of the dataset, and as such, cannot be controlled explicitly. Respectively, they denote implicit distance and mutual information bounds that the learning process aims to optimize for. We leverage them to prove the theoretical equivalence of existing abstract relational representation learning algorithms and our interpretable version. Specifically, delta is an estimate of the best achievable error (Bayes error rate). Gamma is a way of quantifying the amount of emergence encoded by pairs of views, i.e., how important it is to jointly (rather than individually) observe the two views in order to determine the global structure of the object.\ One implicit way of controlling both delta and gamma at the level of the dataset is by altering the set of local views. We do this as part of addressing W3 and Q2, by introducing local views from other classes at both the instance and the proxy level, as well as our experiment on Robustness to Noisy Views in the main manuscript. 
The results show that TRD is able to efficiently optimize for both of these parameters by identifying the most relevant subgraph comprising local views of the target class. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I appreciate the response and the thorough additional experiments conducted by the authors. These efforts have considerably clarified several of my uncertainties. Particularly, the incorporation of new graph visualizations involving causal interventions provides substantial support for the paper's central proposition. For optimal clarity and comprehension, I recommend that the authors consider enhancing their presentation in accordance with the suggestions outlined. I am inclined to elevate my rating, keeping in mind the anticipation that the authors will adequately substantiate the rationale behind the new experiments in the forthcoming revised version, should it be accepted. --- Reply to Comment 1.1.1: Title: Note of thanks Comment: We thank the reviewer for going through our rebuttal response, validating the findings, and increasing their score. We will update the final version with all the new experimental results along with their underlying rationale, as well as the clarifications that we have provided as part of the rebuttal.
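For reference, the `pairwise_hausdorff` step in the pseudocode above is a learnable approximation of a Hausdorff-type distance between graphs. The classic (non-learnable) symmetric Hausdorff distance between two sets of embeddings can be sketched as follows; the point sets here are illustrative, and the paper's Hausdorff Edit Distance operates on graphs rather than plain point sets:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n, d) and B (m, d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise L2 distances
    # max over points of the distance to the nearest point in the other set
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [4.0, 0.0]])
# Nearest-neighbor distances: A->B gives [0, 3], B->A gives [0, 3]
assert hausdorff(A, B) == 3.0
```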
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments and feedback. We have addressed their individual concerns in their respective rebuttal sections. Here we attach some qualitative results for addressing the comment by Reviewer mGG3 on experimenting with causal interventions. N.B. It can be observed that we choose FGVC Aircraft for performing some of the experiments requested by the reviewers. Our choice was motivated by the fact that the performance of our model on FGVC Aircraft is generally reflective of the general trends across other datasets, because of its challenging low intra-class and high inter-class similarities. Also, the size of the dataset is reasonable enough for us to complete all the experiments suggested by the reviewers within the rebuttal time frame. Pdf: /pdf/42c2c1004a7db7f196b73bb24e656a1b8aaf4958.pdf
NeurIPS_2023_submissions_huggingface
2023
Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective
Reject
Summary: This paper introduces a novel approach to adversarial attacks that goes beyond traditional norm-bounded attacks. Instead, the proposed method focuses on unrestricted attacks that are both effective and capable of preserving the semantic meaning of the input data. The method utilizes Langevin Monte Carlo techniques to sample from a distribution of potential attacks. To ensure semantic preservation, a learned energy function is employed, which guides the generation of adversarial samples. Rejection sampling and refinement techniques are then applied to select and further improve the quality of the generated samples. The evaluation of the proposed method demonstrates a significant success rate when attacking defended models. By allowing for unrestricted attacks while maintaining semantic integrity, this approach presents a promising advancement in the field of adversarial attacks, showcasing its effectiveness and potential for practical application. Strengths: 1. Interesting work on unrestricted adversarial attacks, which is important given that most current attacks are norm-bounded. 2. The method is effective in breaking already defended models. Figs. 1 and 2 clearly show the advantage over norm-bounded attacks. Weaknesses: 1. What is the computational cost of the attack? The paper only evaluates on two toy datasets, MNIST and SVHN; the reviewer is wondering whether the method can generalize to larger datasets. 2. An ablation study on the components is missing, e.g., TPS as data augmentation and the effect of the choice of sampling method. Also, the method requires specifying several hyperparameters, such as M; an ablation study would be useful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is the TPS augmentation used to capture the energy function of the semantics? Would TPS augmentation also work for other types of data, say semantic segmentation, where the location of each pixel matters a lot?
Similar constraint functions are used for defending against adversarial attacks, as in [1,2], but there the constraint is used for defense. Can the authors discuss whether their attack can break the dynamic defense in [1,2], where the defense reverses the attack back to the benign manifold? [1] Mao et al. Adversarial Attacks are Reversible with Natural Supervision. ICCV 2021. [2] Mao et al. Robust Perception through Equivariance. ICML 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
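The pipeline this review summarizes (Langevin Monte Carlo over an unnormalized adversarial density, then rejecting samples that fail to fool the classifier) can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's actual models: the "victim" score is a quadratic pull toward a hypothetical misclassified point, and a Gaussian around the original image stands in for the learned per-instance energy function.

```python
import numpy as np

def grad_log_p_adv(x, x_ori, target, lam=1.0):
    """Gradient of a toy unnormalized log adversarial density
    log p_adv = log p_vic + log p_dis. Here p_vic is a stand-in score
    pulling x toward a misclassified point `target`, and p_dis is a
    Gaussian around x_ori standing in for the learned energy model."""
    grad_vic = -(x - target)        # hypothetical victim-classifier term
    grad_dis = -lam * (x - x_ori)   # hypothetical semantic-distance term
    return grad_vic + grad_dis

def langevin_sample(x_ori, target, steps=2000, eta=1e-3, beta=200.0, seed=0):
    """Unadjusted Langevin dynamics on beta * log p_adv with a box
    constraint on pixel values: x <- clip(x + eta*beta*grad + sqrt(2*eta)*noise)."""
    rng = np.random.default_rng(seed)
    x = x_ori.copy()
    for _ in range(steps):
        drift = eta * beta * grad_log_p_adv(x, x_ori, target)
        noise = np.sqrt(2 * eta) * rng.standard_normal(x.shape)
        x = np.clip(x + drift + noise, 0.0, 1.0)
    return x

def rejection_sample(x_ori, target, fools, n_chains=8):
    """Keep only chains whose end points fool the (toy) classifier predicate."""
    samples = [langevin_sample(x_ori, target, seed=s) for s in range(n_chains)]
    return [x for x in samples if fools(x)]
```

The weight `lam` trades attack success against proximity; in the paper's method the distance term would come from the gradient of the trained EBM's energy rather than a fixed Gaussian.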
Rebuttal 1: Rebuttal: Thanks for reviewing! Below is our response: ### Weaknesses 1. This attack costs more than traditional methods because we have to fit an energy-based model for each instance if we want to generate an adversarial example based on that instance. In our global response, we added a CIFAR10 experiment. 2. For the TPS augmentation, the noise follows a Gaussian distribution with a variance of 0.01. We refrained from conducting an ablation study on this parameter, as an inappropriate selection could disrupt the training of the energy-based model. As for M, due to resource constraints, particularly the need for annotators to label the data, we were unable to perform an ablation study. ### Questions 1. The primary advantage of TPS augmentation is that it augments the dataset with similar data, facilitating a smoother training process for the EBM. 2. Yes, exactly. TPS augmentation is a commonly employed technique in image segmentation tasks. 3. Thank you for introducing these studies to me! There's a profound intrinsic connection between these works and ours. While they focus on the data manifold, we emphasize the data distribution. This distinction results in them utilizing the contrastive loss as a constraint function, while we employ probability. We are confident that our attack could effectively challenge this dynamic defense because **our generated adversarial examples maintain the essence of what they are attempting to reverse**. #### Reference [1] Mao et al. Adversarial Attacks are Reversible with Natural Supervision. ICCV 2021. [2] Mao et al. Robust Perception through Equivariance. ICML 2023. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: The reviewer thanks the authors for the rebuttal. While more interesting studies can be done in the future to address the limitations, the reviewer thinks this is a good initial step to introduce this new type of attack.
--- Rebuttal 2: Title: Better Visual Result on CIFAR-10 Comment: Echoing the suggestions from reviewers oacq and tkih, we recognized the need to enhance the visual results on CIFAR-10. Through further experimentation, we found that by reducing the perturbation magnitude of TPS and incorporating scaling into $\mathcal{T}$, we achieved more visually appealing results for CIFAR-10. Adhering to the submission guidelines, I can't provide direct images or links here. Nonetheless, I've submitted the improved visuals to the area chair for review, and I anticipate that you will be able to access them soon. We hope these updates address your concerns more comprehensively. Your insights regarding TPS adjustment have been invaluable during this phase, and we are truly appreciative of your guidance. Nonetheless, it's worth noting again that a comprehensive ablation study on TPS's parameters was not feasible for us. Reducing the perturbation of the data makes the energy-based model more challenging to train, as highlighted in [1, 2]. #### Reference [1] Song, Yang, and Diederik P. Kingma. "How to train your energy-based models." arXiv preprint arXiv:2101.03288 (2021). [2] Grathwohl, Will, et al. "Your classifier is secretly an energy based model and you should treat it like one." arXiv preprint arXiv:1912.03263 (2019).
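The per-instance augmentation set that underlies the EBM's notion of semantic distance can be illustrated with a rough stand-in. The random integer shifts plus small Gaussian pixel noise below are an assumed simplification of the actual TPS warps (and the scaling later added to $\mathcal{T}$); the thin-plate-spline control-point solve itself is omitted.

```python
import numpy as np

def augment_instance(x_ori, n_aug=64, max_shift=2, noise_std=0.1, seed=0):
    """Build the per-instance training set {t_1(x_ori), t_2(x_ori), ...}
    for the energy-based model. Random integer shifts plus Gaussian pixel
    noise stand in here for the TPS warps used in the paper (whose
    control-point noise has variance 0.01, per the rebuttal)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_aug):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        t = np.roll(x_ori, (dy, dx), axis=(0, 1))  # cheap 'semantics-preserving' warp
        t = np.clip(t + noise_std * rng.standard_normal(t.shape), 0.0, 1.0)
        out.append(t)
    return np.stack(out)
```

The resulting stack is exactly the dataset on which the per-instance EBM would be fit; the whole point of the rebuttal's tuning is that gentler transforms make this set tighter around the original image.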
Summary: The adversarial examples generated by classical methods such as PGD can have a semantic meaning different from the original label, which means that the adversarial examples are easy for humans to distinguish. In this paper, the authors focus on the generation of adversarial examples that preserve the original semantic information. They propose a semantics-aware distance measure to replace the geometric distance measure, and they use the Langevin Monte Carlo method to find minima (adversarial samples) of their proposed loss function. Several techniques that further enhance the performance of the proposed method are presented. From the experimental results, it seems that the generated examples preserve the original semantic information. Strengths: * As far as I know, the proposed adversarial attack method is novel. * They propose a semantic distance measure to generate semantics-aware examples. Although the idea of a semantic measure already exists in much previous work, I think its usage here in the adversarial example generation scenario is interesting and reasonable. * Their method is theoretically and experimentally reliable. Weaknesses: * One limitation of this paper is that the loss of semantics in adversarial examples only arises in some simple tasks, such as MNIST and SVHN. As the experimental results in previous work show, adversarial examples for CIFAR and ImageNet have very small perturbations that cannot be distinguished by humans and preserve the semantic information. Hence, I think the significance of this paper is somewhat limited. * The motivation for using EBMs and LMC is not very clear to me. In my opinion, we could directly optimize the semantics-aware loss to generate the adversarial examples. The necessity of using EBMs and LMC should be stated more clearly. * In the experiment part, the success rate involves subjective factors.
They use human annotators to determine whether the adversarial examples have the same meaning as the original label. Is there a more objective metric? Otherwise, the experimental results may suffer a credibility crisis. * More experiments on CIFAR-10 and CIFAR-100 are necessary. * Can you give a more detailed explanation of the training of the energy-based model? I noticed that Section 2.5 includes a brief introduction, but what is the data distribution $p_d$ here? What is the specific training algorithm? If the authors can address my concerns well, I will consider raising the score. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * I am confused when I read Lines 69-70. Should $\exp(g(x))$ be replaced by $\exp(-g(x))$? Otherwise, the distribution $p(x)$ seems to concentrate around the global maximum. * If we use the semantics-aware adversarial examples to adversarially train the model, will the model be robust to semantics-aware adversarial examples? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks! Here are our responses to each of the points you've raised in your concerns: ### Weaknesses - Referring to Figure 1 in our global response PDF, the adversarial samples produced by PGD on CIFAR retain their semantics. However, the attacked images exhibit visual features, such as unnatural colors, which make them potentially detectable by humans. - Indeed, one can directly optimize the semantics-aware loss to produce adversarial examples. Yet, in this research, we introduce a principled probabilistic model from which adversarial examples can be sampled. The unnormalized adversarial distribution we've proposed employs LMC as an intuitive method for sampling from this distribution. Subsequently, the EBM is integrated to replace the distance distribution $p_{dis}$, thereby implicitly representing a semantic distance. We opt for the EBM over other probabilistic models primarily because the EBM inherently aligns well with LMC. For a more in-depth discussion, please refer to Question 2 by reviewer oacq. - In addition, we assess the transferability of our proposed method in our global response. - We've incorporated a CIFAR10 experiment in our global response. - We use the same training method as Du et al.'s work [1]. If we want to fit $p_{dis}(x_{adv}; x_{ori})$ with an energy-based model, then the dataset for training the energy-based model is $\\{ t_1(x_{ori}), t_2(x_{ori}), ... \\}$, a dataset generated from a single image $x_{ori}$. The training details are introduced in [1]. ### Questions - Thank you for pointing out the typo. The term $-E_\theta$ in line 70 should be correctly noted as $E_\theta$. - In our scenario, the outcome hinges on the choice of $\mathcal{T}$ as introduced in line 122. If a model's training phase includes adversarial examples induced by $\mathcal{T}$, then this model will exhibit robustness to adversarial examples generated by the same set of transformations $\mathcal{T}$.
However, it's unlikely that the model will remain robust against adversarial attacks prompted by a different set of transformations, $\mathcal{T}'$. #### Reference [1] Du, Yilun, and Igor Mordatch. "Implicit generation and generalization in energy-based models." arXiv preprint arXiv:1903.08689 (2019). --- Rebuttal Comment 1.1: Comment: Thank the authors for their responses. Their response partially addressed my concerns and I maintain the rating. --- Reply to Comment 1.1.1: Title: Better Visual Result on CIFAR-10 Comment: Thank you for your feedback! Echoing the suggestions from reviewers oacq and tkih, we recognized the need to enhance the visual results on CIFAR-10. Through further experimentation, we found that by reducing the perturbation magnitude of TPS and incorporating scaling into $\mathcal{T}$, we achieved more visually appealing results for CIFAR-10. Adhering to the submission guidelines, I can't provide direct images or links here. Nonetheless, I've submitted the improved visuals to the area chair for review, and I anticipate that you will be able to access them soon. We hope these updates address your concerns more comprehensively.
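The training recipe the rebuttal points to (fitting $p_{dis}$ with an EBM on transformed copies of a single image, following Du & Mordatch) reduces to the classic contrastive loop: push energy down on augmented positives and up on short-run Langevin negatives. The toy Gaussian energy below (a single learnable mean, fixed unit precision) is an illustrative assumption chosen so the gradient has a closed form; it is not the paper's network.

```python
import numpy as np

def energy(x, m):
    """Toy EBM: E(x) = 0.5 * ||x - m||^2, i.e. a unit-variance Gaussian."""
    return 0.5 * np.sum((x - m) ** 2, axis=-1)

def langevin_negatives(m, n, dim, steps=100, eta=0.05, rng=None):
    """Short-run Langevin chain sampling from the current model exp(-E)."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((n, dim))
    for _ in range(steps):
        x = x - eta * (x - m) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
    return x

def fit_ebm(positives, iters=200, lr=0.5, seed=0):
    """Contrastive-divergence-style training: descend E(pos) - E(neg).
    For this quadratic energy, grad_m [mean E(pos) - mean E(neg)]
    equals mean(neg) - mean(pos)."""
    rng = np.random.default_rng(seed)
    dim = positives.shape[1]
    m = np.zeros(dim)
    for _ in range(iters):
        neg = langevin_negatives(m, len(positives), dim, rng=rng)
        m -= lr * (neg.mean(axis=0) - positives.mean(axis=0))
    return m
```

With positives drawn as augmentations of one image, the learned energy is low exactly on the "semantic neighborhood" that $\mathcal{T}$ defines, which is the role $p_{dis}$ plays in the attack.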
Summary: This paper proposes to generate semantics-preserving adversarial examples by framing the construction of adversarial examples as a box-constrained non-convex optimization problem. More specifically, the authors propose a Langevin Monte Carlo (LMC) technique to craft adversarial examples that preserve the meaning of the original inputs they are derived from. With this framing, they cast the generation of adversarial examples as sampling from a semantics-based probabilistic distribution. The authors showed that their semantics-aware adversarial attack is capable of fooling robust classifiers while preserving most of the semantics of the source images. Strengths: This paper is quite interesting and well-written. The problem is well-defined, and the solution quite intuitive. The math is also quite sound. Although the problem of generating semantics-preserving adversarial examples has been studied extensively in the past, it still remains relevant. This paper proposes another interesting perspective on how to approach this problem. Weaknesses: Although the paper is interesting, the evaluation is quite limited. For instance, the approach is only evaluated on MNIST and SVHN. Evaluating the approach against "more challenging" datasets like ImageNet, CIFAR-10, and CIFAR-100 would make the contributions more compelling. Also, studying the transferability property of the attacks would strengthen the paper and give readers more confidence in the strength of the attacks. Moreover, I would have liked to see how the magnitude of the noise used in the thin-plate spline affects the overall performance of the attacks. Finally, the related work section is rather limited. There is a plethora of interesting studies on crafting adversarial examples that are semantics-preserving. For instance, [1] and [2] are quite related to the approach the authors propose, and should be evaluated or discussed further in the related work section.
[1]: Semantics Preserving Adversarial Examples. https://aisecure-workshop.github.io/amlcvpr2021/cr/27.pdf [2]: Localized Uncertainty Attacks. https://ui.adsabs.harvard.edu/abs/2021arXiv210609222A/abstract Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would highly recommend that the authors further experiment with datasets like ImageNet, CIFAR-10, CIFAR-100, etc., study the transferability property of their adversarial attacks, and improve the related work section by comparing their approach against relevant approaches proposed in the past. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing! Below is our response: > ... the approach is only evaluated on MNIST and SVHN ... Also, studying the transferability property of their attacks would strengthen their paper, and give more confidence to the readers about the strength of their attacks. We've incorporated a CIFAR10 experiment and evaluated transferability in our global response. > Moreover, I would have liked to see how the magnitude of the noise used in the thin-plate spline affects the overall performance of their attacks. The noise follows a Gaussian distribution with variance 0.01. We did not perform an ablation study on this parameter because a bad choice of this parameter may disrupt the training of the energy-based model. > Finally, the related work section is rather limited. In light of your feedback and input from other reviewers, we will provide a more comprehensive discussion of related work in our updated version. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal and appreciate the additional experiments they ran. After carefully examining the adversarial examples generated from the CIFAR-10 dataset, it's fair to say that this method is not that semantics-preserving, as the images appear quite distorted. The paper is still interesting nonetheless. Maybe some of the claims could be watered down a bit, and the limitations clearly specified in the manuscript. --- Reply to Comment 1.1.1: Title: Feedback Response Comment: Thank you for your feedback. Based on our understanding, Figure 1 in the attached PDF suggests that **TPS might not be the optimal data augmentation method for preserving CIFAR10's semantics**. However, this does not invalidate our overall approach. As mentioned in section 3.2 of the submitted paper: > In practice, the choice of $\mathcal{T}$ depends on human subjectivity related to the dataset. Individuals are able to incorporate their personal comprehension of semantics into the model by designing their own $\mathcal{T}$.
If we consider TPS as a suitable method for preserving CIFAR10's semantics, meaning that the distortions introduced by TPS don't hinder our understanding of the image's intent, then Figure 1 in our attached PDF is in alignment with our intentions. While distortions in hand-written digits don't inhibit our ability to identify the digit, the images on the right side of Figure 1 might appear unnatural to human viewers. This is especially true for objects that typically have a defined structure, such as cars, trucks, and ships. Additionally, the perspective from which the object is viewed can influence this perception. Our assertion that we "transcends the restriction imposed by geometric distance, instead opting for semantic constraints" is underpinned by the mathematical framework presented in section 3.1. The perceptual incongruities evident in Figure 1 arise primarily from the choice of TPS as a data augmentation method and its potential effects on semantics, rather than an inherent flaw in our proposed method. Our method provides a pathway for individuals to embed their subjective understanding of semantics via data augmentation, represented by $\mathcal{T}$. Yet, if this interpretation doesn't resonate with general human semantic perception, the resulting images may be suboptimal. As you've pointed out, we will highlight this nuance in the limitations section of our paper.
Summary: In this work, a probabilistic view of adversarial examples based on the [projected stochastic gradient Langevin algorithm](https://proceedings.mlr.press/v134/lamperski21a.html) is introduced and used as an optimization algorithm instead of the SGD or Adam optimizer for adversarial examples. In addition, the geometric constraint (Lp norms) is replaced by a semantic distance criterion based on an instance-wise energy-based model (i.e., an EBM is trained for each instance, using transformed versions as the training dataset) to ensure semantic/visual proximity to the original input. They improved the adversarial examples using the [CW objective](https://www.computer.org/csdl/proceedings-article/sp/2017/07958570/12OmNviHK8t) and thin-plate splines transformation to create a more diverse training dataset for EBM training. Moreover, they generated a set of successful adversarial attacks (i.e., fooled the classifier) via rejection sampling and proposed a simple selection procedure to select the final adversarial examples based on the softmax probabilities of an auxiliary classifier and the energy of the examples. The experiments show that the proposed method is able to generate adversarial examples that fool the classifier while being visually/semantically indistinguishable to humans. Strengths: - The proposed method is very detailed and intricate. - The Langevin Monte Carlo-based optimization procedure seems to improve the quality of adversarial examples overall. - The paper is well-written and clearly structured. - Code is provided. Weaknesses: - Previous work, e.g., by [Sharma & Chen](https://openreview.net/forum?id=Sy8WeUJPf), has also generated visually similar adversarial examples for the MadryNet while still using a geometric distance ([elastic-net regularization](https://arxiv.org/abs/1709.04114)). 
This raises questions about the generality of the work’s central claim that it “transcends the restriction imposed by geometric distance, instead opting for semantic constraints” (L4-5) beyond the limitations of the adversarial attack methods shown in the present work. - The present work only shows experiments on digit-based datasets (MNIST & SVHN). Applications to datasets with natural images (e.g., CIFAR or ImageNet) are missing. Consequently, the necessity and applicability of the proposed adversarial attack are very unclear, since for natural images the adversarial examples typically remain visually very close to the original inputs; also after adversarial fine-tuning. - The work is missing interesting experiments, e.g., what would happen if we use the proposed adversarial attack approach for adversarial training? Does it improve adversarial robustness? Does the adversarial attack also bypass certified defenses? Overall, the experimental section is very short (3 lines of results) and would greatly benefit from, e.g., the aforementioned experiments. - The approach requires an instance-wise energy-based model for its semantic distance loss, which must be trained for every sample (on different augmented versions); cf. L122. This may limit its applicability. - The proposed attack and problem setup are not quite original, i.e., it combines well-known techniques, or previous work (see first point above) has also already targeted the visual similarity challenge of adversarial examples for adversarially fine-tuned models. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: - Couldn't we just train a single energy-based model for the specific data domain? If so, how do the generated adversarial examples compare to those using an instance-based energy-based model for semantic divergence? - Why do the authors refrain from using (currently) better generative methods? - Regarding Fig. 
2: are the samples generated using only the classifier (neglecting the distance distribution term) for a & b and vice versa for c? ## Suggestions - The related work could include more discussion on previous works on adversarial examples. - For the minimization problem formulations in Sec. 2.1., it’d be good to include that $x_{adv}$ is minimized (even though it’s obvious given the work’s scope). - It’d be meaningful to include error bars for the experiments. - Results for Song et al in Tab. 1 should be repeated for better comparability, if possible. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
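The CW objective mentioned in this review's summary is Carlini & Wagner's margin loss on raw logits; for a targeted attack on class $t$ it reads $f(x) = \max(\max_{i \neq t} Z(x)_i - Z(x)_t, -\kappa)$. A direct transcription of the published formula (a standalone helper, not the paper's code):

```python
import numpy as np

def cw_objective(logits, target, kappa=0.0):
    """Carlini-Wagner f_6 margin loss for a targeted attack: the value
    bottoms out at -kappa once the target logit beats every other logit
    by at least the confidence margin kappa."""
    z = np.asarray(logits, dtype=float)
    z_other = np.max(np.delete(z, target))  # best competing logit
    return max(z_other - z[target], -kappa)
```

Minimizing this term drives the attack toward misclassification, which is why it pairs naturally with the Langevin refinement described in the summary.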
Rebuttal 1: Rebuttal: Thanks! Here are our responses to each of the points you've raised in your concerns: ### Weaknesses - The assertion that our method 'transcends the restriction imposed by geometric distance' is based on a theoretical perspective, as outlined in lines 112-116. In this context, geometric distances lead to certain specific distributions for $p_{dis}$. For instance, an L2 norm corresponds to a Gaussian distribution, while an L1 norm aligns with a Laplace distribution. The elastic-net attack you mentioned leverages a weighted sum of L1 and L2 distances, which in turn leads to a weighted product of a Laplace distribution and a Gaussian distribution for $p_{dis}$. Indeed, by selecting an optimal weight $\beta$, the elastic-net strategy demonstrates effective results in attacking MadryNet, thereby highlighting the good performance of the induced $p_{dis}$. However, the $p_{dis}$ that arises from geometric distance represents just a minuscule portion of all possible $p_{dis}$ choices. We posit that as probabilistic generative models continue to evolve, data-driven $p_{dis}$ models should be able to provide superior performance. - We have incorporated CIFAR10 experiments in our global response, see Figure 1 of the attached PDF. - Kindly consult Table 1 and Table 2 in the PDF attached to our global response. Our technique can successfully circumvent certified defenses. Additionally, we showcase the transferability of our proposed attack strategy in these tables. We refrained from incorporating our generated adversarial examples into a subsequent adversarial training process. This is because our attack relies on the data augmentation, denoted as $\mathcal{T}$. Consequently, we don't anticipate that a newly adversarially trained model would exhibit enhanced defense against conventional attacks. - Indeed, training an energy-based model for each $x_{ori}$ is a limitation of our study.
However, we believe there are scenarios where generating a few adversarial examples is both sufficient and crucial. Moreover, we have also proposed a method requiring only one energy-based model per domain, as highlighted in the first bullet point of the 'Questions' section of this response. - We firmly believe that our proposed methodology stands out both in principle and novelty. Our approach introduces a fresh, elegant probabilistic perspective by factoring the adversarial distribution into $p_{vic}$ and $p_{dis}$. The notion of $p_{vic}$ resonates with the idea that "samples can be drawn from adversarially trained classifiers", while $p_{dis}$ aligns with the concept of geometric distance, especially when pertaining to Gaussian or Laplace distributions. Moreover, the technique of using a probabilistic model to fit $p_{dis}$ is also a groundbreaking addition. While numerous studies employ generative models in the adversarial domain, we assert that our method offers a distinctive blend of generative modeling and adversarial attack, making it both innovative and elegant. ### Questions - Yes, we can. If we consider each domain as a class, the setup aligns perfectly with Song's configuration, as depicted in formula (8). For a visual representation of the unrestricted adversarial examples generated under this setting, please see Figure 2a in our global response PDF. - We use EBM because it can directly model the unnormalized distribution (formula (3)). For other popular generative models, GANs can not provide $p(x)$, VAEs can only give a lower bound of $p(x)$, and diffusion models attempt to fit a probabilistic model based on a noise-altered $x$, denoted as $p(\tilde{x})$. While both Normalizing flow and PixelCNN can provide $p(x)$, practically speaking, generating quality samples through Langevin dynamics on their gradient $\nabla\log p(x)$ is challenging. 
Although these challenges might be mitigated with certain modifications, our paper's primary objective is to introduce a novel probabilistic perspective on adversarial attacks. Utilizing Langevin dynamics offers a straightforward method for sampling from $p_{adv}$, and employing the EBM to model $p_{dis}$ ensures precision and elegance throughout the model. While this might not guarantee peak performance, we believe it establishes a strong foundation for this model series. - Yes, exactly. Adversarially trained classifiers have generation ability; cf. L110. That is why decomposing $p_{adv}$ into $p_{vic}$ and $p_{dis}$ is a logical approach. ### Suggestions - We'll expand our discussion on related work in response to your feedback and suggestions from other reviewers. - This will be included in our updated version. - In line with the evaluation methodology of Song et al.'s work, we consolidated the results of five annotators. Therefore, providing error bars at this juncture isn't feasible for us. - Based on their GitHub repository, it appears that Song et al.'s work cannot be replicated precisely due to the absence of the adversarial training component. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their detailed rebuttal. Specifically, I appreciate the additional CIFAR-10 experiments, the extension of EBMs from an instance-wise to a class-wise attack design and bypassing geometric certified defenses. Below, I address some of my concerns that still persist after the authors' rebuttal. 
> Severely deformed visual results for CIFAR-10 (e.g., car or horse class) Although I appreciate the experiments on CIFAR-10, they unfortunately provide empirical confirmation of one of my main concerns that the method does not work as well on natural images and thus violates the main goal of this work (“our semantics-aware adversarial attack is capable of deceiving robust classifiers while preserving most of the original image’s semantics” L32-33). While I tend to agree with the former (“deceiving”), I disagree with the latter (“preserving”). For example, the CIFAR classes for cats, dogs, horses, ships, and trucks are severely deformed (interestingly with a similar blurred curve-like pattern?) but PGD with L2 norm is also able to reliably deceive the classifier (except for some truck examples). In my personal evaluation, I would have chosen PGD with L2 norm as the most semantically preserving in Figure 1 in at least 6 out of 10 cases. I am very confident that other people would have similar choices. > “Moreover, the technique of using a probabilistic model to fit [a distance distribution] is also a groundbreaking addition” I tend to disagree and kindly refer to some previous works, e.g., [1]. However, I acknowledge that the practical application of a semantic distance loss to adversarial examples is original; as already mentioned in my original review. > “The assertion that our method 'transcends the restriction imposed by geometric distance' is based on a theoretical perspective, as outlined in lines 112-116.” I was/am aware of these mentioned lines but there are many places of the present manuscript, where it sounded more like a fundamental limitation of Lp distance norms, e.g., L18-22 (introduction). But it is not, as also acknowledged by the authors. Thus, I’d suggest clarifying such statements because it may mislead readers of the work. --- [1] Larsen, Anders Boesen Lindbo, et al. "Autoencoding beyond pixels using a learned similarity metric." ICML 2016. 
--- Reply to Comment 1.1.1: Title: Addressing concerns Comment: Thank you for your feedback! Please allow us to provide further clarification: ### The CIFAR10 result > In my personal evaluation, I would have chosen PGD with L2 norm as the most semantically preserving in Figure 1 in at least 6 out of 10 cases. I am very confident that other people would have similar choices. We acknowledge that the images presented on the right side of Figure 1 might not resonate with human semantic perceptions. However, this doesn't invalidate the methodology we proposed. As mentioned in section 3.2 of the submitted paper: > In practice, the choice of $\mathcal{T}$ depends on human subjectivity related to the dataset. Individuals are able to incorporate their personal comprehension of semantics into the model by designing their own $\mathcal{T}$. In the experiment presented in Figure 1, we employed TPS as our data augmentation method, denoted as $\mathcal{T}$. This implies that we initially assumed that the TPS augmentation, even with its significant distortions, wouldn't alter the semantics. Thus, the resulting distorted images were **in line with our expectations**. The visual discomfort elicited by Figure 1 **solely challenges the assumption that TPS doesn't impact the semantics of CIFAR10**. ### About the distance distribution In our original rebuttal, we used the mathematical notation $p_{dis}$, rather than the phrase "a distance distribution", because this $p_{dis}$ refers to the distance distribution corresponding to a geometric distance, e.g., an Lp norm, in the adversarial attack context. We believe that using a probabilistic model to fit this $p_{dis}$ is novel; it is not merely "the practical application of a semantic distance loss to adversarial examples". We take a probabilistic perspective on this, and it is not just a semantic distance loss.
Thank you for bringing up VAE-GAN [1] as an example to argue that replacing the Gaussian distribution behind the L2 norm with a probabilistic generative model might not be unprecedented. While, on a high level, there may be some resemblance, it's crucial to note that VAE-GAN doesn't utilize a probabilistic generative model per se; instead, it employs a discriminator to provide a similarity metric. Probabilistic modeling is a broad concept, and crafting a specific probabilistic model doesn't inherently imply 'combining well-known techniques' as you've pointed out. Take, for example, the acclaimed Stable Diffusion [2]. It uses a diffusion model to represent the prior of the latent variable, whereas during the VAE era, this was commonly assumed to be a standard Gaussian distribution. Nonetheless, Stable Diffusion stands out as both innovative and efficacious. ### Lp distance norms should be challenged > The present work only shows experiments on digit-based datasets (MNIST & SVHN). ... Consequently, the necessity and applicability of the proposed adversarial attack are very unclear, We contend that experiments on digit-based datasets are essential. Given that the diversity of digits is markedly less than that of natural images, adversarially trained classifiers for digits tend to be more resilient to attacks. Consequently, it can be argued that targeting digit classifiers presents a greater challenge than targeting classifiers designed for natural images. > since for natural images the adversarial examples typically remain visually very close to the original inputs; 'Good enough' should not be a reason to halt further research. From our probabilistic perspective, the L2 attack is mathematically equivalent to viewing $p_{dis}$ as a Gaussian distribution. This corresponds to the implicit assumption that 'minor Gaussian noise doesn't alter semantics.' But does this mean Gaussian noise is the optimal solution for this problem? 
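The equivalence invoked above is a standard identity, sketched here for reference (with $\sigma$ a hypothetical noise scale, not a quantity from the paper): taking $p_{dis}(x' \mid x) = \mathcal{N}(x';\, x,\, \sigma^2 I)$ gives

$$-\log p_{dis}(x' \mid x) = \frac{1}{2\sigma^2}\,\lVert x' - x \rVert_2^2 + \mathrm{const},$$

so minimizing the L2 perturbation norm is exactly maximizing the likelihood under an isotropic Gaussian $p_{dis}$; replacing that Gaussian with a learned distribution is the generalization being argued for.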
Our findings indicate that, for digit datasets, a data-driven distribution shaped by TPS augmentation outperforms the Gaussian distribution. While the CIFAR10 experiments imply that the Gaussian model might be more suitable for natural images than TPS, the pursuit of a superior augmentation should continue. Our work lays a solid foundation for this exploration. #### References [1] Larsen, Anders Boesen Lindbo, et al. "Autoencoding beyond pixels using a learned similarity metric." ICML 2016. [2] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." CVPR 2022.
Rebuttal 1: Rebuttal: We extend our gratitude to all reviewers for their insightful feedback. Attached is a PDF containing relevant figures and tables for your reference. During the rebuttal phase, we've incorporated four additional experiments to enrich our original manuscript: ### CIFAR10 experiment We've introduced an experiment using CIFAR10. A comparison between our method and PGD is depicted in Figure 1, with refined samples presented in Figure 2b and Figure 2c. ### Unrestricted adversarial examples Aligning with the methodology in Song et al.'s study [1], we generated unrestricted adversarial examples using our approach. This process corresponds to formula (8) from our submission. For this purpose, we employed an energy-based model for each image class. ### Attack on certified defenses To highlight our method's efficacy, we employed it to target a certified defense [2]. Drawing a parallel to Song's approach: by overlooking the constraints of geometric distance, the theoretical bound of such a defense becomes ineffective. This might be a contributing factor in our method's ability to sidestep certified defenses. Relevant outcomes are available in Table 1 and Table 2. ### Transferability The transferability of our proposed approach has also been assessed and can be viewed in Table 1 and Table 2. ### References (as linked in the attached PDF) [1] Y. Song, R. Shu, N. Kushman, and S. Ermon. Constructing unrestricted adversarial examples with generative models. Advances in Neural Information Processing Systems, 31, 2018. [2] E. Wong and Z. Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pages 5286–5295. PMLR, 2018. Pdf: /pdf/d7bddda664ddfc482bf0f6d668a6de9021b5bb9f.pdf
NeurIPS_2023_submissions_huggingface
2023
Improving Diffusion-Based Image Synthesis with Context Prediction
Accept (poster)
Summary: This paper proposes to improve diffusion-based image synthesis by explicitly reinforcing each point to predict its neighborhood context during training, without extra cost at inference. To reduce the computation/time complexity of context decoding, the authors propose efficient large-context decoding, adopting the Wasserstein distance to characterize the distribution reconstruction loss. The method is applicable to both discrete and continuous diffusion backbones and achieves new SOTA text-to-image generation on MS-COCO with FID 6.21. Strengths: 1. The paper is well-written and easy to follow. 2. The method of explicitly reinforcing each point to predict its neighborhood context for diffusion models is well-motivated, with effective designs to reduce the substantial computation complexity of large-context neighborhood reconstruction. 3. The method is proven effective in boosting FID scores on MS-COCO text-to-image synthesis for both continuous (eDiff-I) and discrete diffusion (VQ-Diffusion) backbones. Weaknesses: 1. The main results are on text-to-image synthesis and image inpainting. It would be good to add unconditional generation results. 2. The method emphasizes diffusion with better neighbouring context, leading to generations "semantically better consistent with the text prompts" (L233), with "prommising cross-modal semantic understanding" (L234), that "can synthesize more complex objects and scenes" (L236-237). I don't think these claims are well-justified: e.g., Fig. 2 and Fig. 3 only present results of the proposed method without any comparison to warrant the aforementioned conclusions. More analysis and evidence of "better semantics" are required other than the overall FID score. 3. Some notations are misleading, e.g. L114-116, for h_i (h_{t-1}), the subscript is used to indicate both spatial and time; x_t is not defined in the main text. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. L260-262: Regarding Fig. 5, I'm not convinced by the observation and conclusion; could the authors make it clearer? 2. I'm curious about the comparison with Dalle-2 and Imagen in Fig. 7, as the models are not open-sourced. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer 2mXr for the positive review and valuable feedback. We are glad that the reviewer found that the paper is well-written and easy to follow, the method is well-motivated with effective designs, and the method is proven effective in boosting FID scores with both continuous and discrete diffusion backbones. Please see below for our responses to your comments.* **Q1: The main results are on text-to-image synthesis and image inpainting. It would be good to add unconditional generation results. More evidence of "better semantics" is required other than the overall FID score.** A1: We have listed unconditional generation results on four datasets in the appendix, and we consistently outperform previous methods. More qualitative comparison results demonstrating the "better semantics" of our method can be found in the **global response pdf**. From the visual comparison, we conclude that our ConPreDiff can sufficiently capture the semantics in the text prompt and better express them in the generated images compared to the powerful LDM and Imagen. **Q2: Some notations are misleading, e.g. L114-116, for $h_i (h_{t-1})$, the subscript is used to indicate both spatial and time; x_t is not defined in the main text.** A2: $h_{t-1}$ and $x_t$ are used to denote the predicted point and the previously-predicted image in the diffusion process, respectively. We will make the notations clearer; thanks for your constructive suggestions. **Q3: L260-262: Regarding Fig. 5, I'm not convinced by the observation and conclusion; could the authors make it clearer?** A3: As demonstrated in Fig. 5, the neighborhood decoding error, the point-wise reconstruction error, and the FID score consistently decline during training. This means that our "context prediction" term and the standard "point-wise reconstruction" term are cooperative; they jointly improve the FID score. 
We theoretically prove this cooperation (from the perspective of maximizing the ELBO/likelihood) as follows. At time t: Loss of DDPM (after reparameterization and scaling): $||x_0-\hat{x}_0(x_t,t)||_2^2$ Loss of ConPreDiff: $\sum\_{i = 1}^{x*y} [\mathcal{M}\_p(x\_0^{i} ,\hat{x}\_0^{i})+ \mathcal{M}\_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})]$ We let $\mathcal{M}_p,\mathcal{M}_n$ be the square loss, $\mathcal{M}_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})=\sum\_{j\in \mathcal{N}\_i}(x\_0^{i,j}-\hat{x}\_0^{i,j})^2$, where $x_0^{i,j}$ is the j-th neighbor in the context of $x_0^i$ and $\hat{x}_0^{i,j}$ is the prediction of $x_0^{i,j}$ from a denoising neural network. Following the notation in the main paper, we have $\hat{x}_0^{i,j} = \psi_n(\psi_p(x_t,t)(i))(j)$, where $\psi_p(x_t,t)(i)$ is the prediction of $x_0^i$ and $\psi_n$ is the neighborhood decoder. Compactly, we can write the denoising network as: $$\Psi(x_t,t)(i,j) =\left\\{ \begin{array}{l}\psi_n(\psi_p(x_t,t)(i))(j), & j \in \mathcal{N}\_i, \\\\ \psi_p(x_t,t)(i), & j=i \end{array} \right.$$ We can show that the DDPM loss is upper bounded by the ConPreDiff loss, by reparameterizing $\hat{x}_0(x_t,t)$. 
Specifically, for each unit i in the feature map, we predict the unit i and its neighbors using equation (1), and then we use the mean of the predicted values in the neighborhood as the final prediction: $$\hat{x}\_0(x_t,t)(i) = 1/(|\mathcal{N}\_i|+1)*\sum\_{j \in \mathcal{N}\_{i}\ \cup\ \\{i\\}}\Psi(x\_t,t)(i,j)$$ Now we can show the connection between the DDPM loss with the mean prediction and the ConPreDiff loss: $$\begin{array}{rl}||x_0-\hat{x}\_0(x_t,t)||_2^2 & = \sum_i (x_0^i-\hat{x}\_0(x_t,t)(i))^2,\\\\ &=\sum_i (x_0^i-\sum\_{j \in \mathcal{N}_i\cup\{i\}}\Psi(x_t,t)(i,j)/(|\mathcal{N}_i|+1))^2,\\\\ &=\sum_i(\sum\_{j \in \mathcal{N}_i\cup\{i\}}(\Psi(x_t,t)(i,j)-x_0^i))^2/(|\mathcal{N}_i|+1)^2, \\\\ (Cauchy\ inequality) & \leq \sum_i \sum\_{j \in \mathcal{N}\_i\cup\{i\}} (\Psi(x_t,t)(i,j)-x_0^i)^2/(|\mathcal{N}_i|+1), \\\\ &=1/(|\mathcal{N}\_i|+1)\sum_i [(x_0^i- \psi_p(x_t,t)(i))^2+\sum\_{j \in \mathcal{N}\_i} (x_0^{i,j}-\hat{x}\_0^{i,j})^2] \end{array}$$ In the last equality, we assume that the feature map is padded so that each unit i has the same number of neighbors $|\mathcal{N}|$. As a result, the ConPreDiff loss is an upper bound of the negative log likelihood. Thus both terms are cooperative for optimizing the generation quality of the model. **Q4: The comparison with Dalle-2 and Imagen.** A4: For the results of Dalle-2 and Imagen, we directly adopt the published results from their papers for fair comparison. We adopt the code reproduced by the open-source community and further modify it to implement our ConPreDiff. We are committed to open-sourcing all the code and trained models for all datasets and tasks upon acceptance. --- Rebuttal Comment 1.1: Comment: Thanks for the response, I've raised my rating accordingly.
Summary: This paper proposes to improve diffusion-based image generative training objectives by adding a context prediction loss. The motivation for predicting context comes from non-diffusion-based models in tasks like semantic segmentation and representation learning. To mitigate the complexity of predicting the large per-pixel neighbourhood context, the authors further model the context as a probability distribution using the Wasserstein distance. Experiments show the proposed model achieves new SoTA generation on MSCOCO for both discrete and continuous diffusion models. Strengths: The introduction is well-written and the motivation for predicting context in a diffusion-based model is easy to follow. The presentation of the method, including the loss derivation and the training pipeline, is easy to understand. The authors conduct intensive experiments showing the proposed context prediction loss can be used on various DM models, achieving SoTA performance on MSCOCO FID and inpainting tasks. Weaknesses: The paper lacks training and implementation details. For example, the text-to-image experiment uses a T5 encoder as the text encoder, but does not mention architecture details and training details. One big motivation for modeling context as a probability distribution is to improve training efficiency. As shown in Figure 6, feature matching has lower throughput compared to distribution matching, but it has better FID. I think an important baseline is missing – sampling-based feature matching, i.e. use the same number of random samples (9) as proposed in the distribution matching, instead of the full neighbourhood features, for context prediction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Table 1, it is not clear which diffusion model is used with the proposed method for both discrete and continuous diffusion models. 
For the qualitative comparison in Figures 2 and 8, there is no head-to-head comparison with other baselines, and it is hard to appreciate the improvement. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no limitation/future work discussion in the paper. And there are also no training/implementation details in the paper, which causes concerns about reproducibility. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer ydZz for the positive review and valuable feedback. We are glad that the reviewer found that the introduction is well-written, the motivation is easy to follow, the presentation of the method including the loss derivation and the training pipeline is easy to understand, and the conducted experiments are extensive. Please see below for our responses to your comments.* **Q1: The paper lacks training and implementation details. For example, the text-to-image experiment uses a T5 encoder as the text encoder, but does not mention architecture details and training details.** A1: We provide some important implementation details in section 4.1. Because our ConPreDiff can generalize to both continuous and discrete diffusion models, our training and implementation details mainly follow the diffusion models that we generalize to, for fair comparisons. Here we provide more details for clarity: For text-to-image tasks, like Imagen, an Efficient U-Net architecture with 22 ResNet blocks in total is utilized. A frozen T5-xl encoder is utilized as our text encoder. The text encoder is a 12-layer Transformer with 32-head multi-head attention and is pre-trained on a C4 text-only corpus with our context-prediction-based denoising objective. We use the standard Adam optimizer with a 1e-4 learning rate. We use the same cosine noise schedule as Improved DDPM [1]. For inpainting tasks, we adopt the same pipeline as RePaint, and a smaller U-Net architecture with 12 ResNet blocks in total is utilized. The image size is 256x256. We use T = 250 time steps, and apply r = 10 resampling passes with jump size j = 10. For unconditional generation tasks, like LDM, we use an 8-layer U-Net. The max channels are 224. We use T = 2000 time steps and a linear noise schedule. The initial learning rate is 9.6e-5. **We are committed to open-sourcing all the code and trained models for all datasets and tasks upon acceptance.** [1] Nichol A Q, Dhariwal P. 
Improved denoising diffusion probabilistic models[C]//International Conference on Machine Learning. PMLR, 2021: 8162-8171. **Q2: Adding another baseline – sampling-based feature matching, i.e. use the same number of random samples as proposed in the distribution matching, instead of the full neighborhood features, for context prediction.** A2: Following your suggestion, we conducted quick experiments with sampling-based feature matching on the CelebA-HQ dataset. Compared with our approach, it predicts neighbors in a faster way, but it performs much worse than our "context prediction", which is based on distribution decoding, because our ConPreDiff adopts a surrogate loss for matching the entire neighborhood. Our approach thus achieves a better trade-off between the FID score and training cost. |CelebA-HQ |LDM| LDM + Context Prediction| LDM + Sampling-based Feature Matching| | :-----| :----: | :----: |:----: | |Training time (sec/step) | 4.02 | 4.79| 4.22 | |FID score| 5.11 | **3.22**| 4.35 | **Q3: In Table 1, it is not clear which diffusion model is used with the proposed method for both discrete and continuous diffusion models.** A3: In the implementation details of section 4.1, we introduce that we generalize our ConPreDiff to discrete diffusion models (Improved VQ-Diffusion) to form our discrete $\text{ConPreDiff}\_{dis}$ and to continuous diffusion models (DALL-E 2, Imagen) to form our continuous $\text{ConPreDiff}\_{con}$. **Q4: More qualitative comparisons to appreciate the improvement.** A4: More qualitative comparison results can be found in the **global response pdf**. From the results, we can find that our ConPreDiff better expresses the local contexts and consistent semantics in the generated images compared to other methods. **Q5: Limitation/future work.** A5: We will add these in the final version. 
Limitation/future work: While our ConPreDiff boosts the performance of both discrete and continuous diffusion models without introducing additional parameters in model inference, our models still have more trainable parameters than other types of generative models, e.g. GANs. Furthermore, we note the long sampling times of both $\text{ConPreDiff}\_{dis}$ and $\text{ConPreDiff}\_{con}$ compared to single-step generative approaches like GANs or VAEs. However, this drawback is inherited from the underlying model class and is not a property of our context prediction approach. Neighborhood context decoding is fast and incurs negligible computational overhead in the training stage. For future work, we will try to find higher-order and more intrinsic information to preserve for improving existing point-wise denoising diffusion models. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. The response addressed my concerns and I would like to keep my original rating.
Summary: This paper presents ConPreDiff, a method introduced to improve the performance of diffusion models by preserving the neighborhood context of predicted pixels/features. They achieve this by predicting the neighborhood context during the diffusion generation process. To simplify the modeling complexity, they propose predicting distributions instead of directly reconstructing the neighborhood. The method's effectiveness is demonstrated through extensive experiments on unconditional image generation, text-to-image generation, and image inpainting. Strengths: * The idea is intuitive and easy to understand. * The proposed method is general and can be easily applied to recent diffusion models. * The performance of the proposed method is very impressive. Weaknesses: * Recent diffusion models use a UNet backbone, which stacks many convolutional and self-attention layers and thus has a large receptive field. Additionally, LDM also has a decoder, which also has a decent receptive field. Therefore, I am confused about the paper's main claim that the point-wise reconstruction neglects to fully preserve the local context. * There are no visual comparisons of the proposed method and baselines. I am not sure if ConPreDiff can really be more local-context consistent compared to other methods. * The authors need to discuss the additional training cost. Besides, they also need to provide the additional parameters they use for the context prediction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * L138: Why do you choose $\mathcal P_{\mathcal N_i^s}$ in this form? Is there any insight that motivates you to do this? * L115: What is $\mathbf h_{t-1}$? I suppose it should be $\mathbf x_{t-1}$. * L119: What is $\mathbf h_{t-1}$? I think it should be $\mathbf h_i$. * L138: What is $h_u^{(0)}$? * Figure 4: What is NDM? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss their limitations and societal impact in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer DpwQ for your valuable feedback. We are glad that the reviewer found that our idea is intuitive and easy to understand, the proposed method is general and can be easily applied to recent diffusion models, and the performance of the proposed method is very impressive. Please see below for our responses to your comments.* **Q1: Being confused about the paper's main claim that the point-wise reconstruction neglects to fully preserve the local context.** A1: Although existing UNet-based diffusion models have a decent receptive field, the receptive field is acquired progressively, and each point's local context information from previous layers may be partially reduced or filtered out by the multiple non-linear and pooling functions. Thus we explicitly add a "context prediction" term at the end of the denoising network to reconstruct the local context of each point, maximally preserving useful context information from the perspective of reconstructing the neighborhood distribution. **Q2: More visual comparison.** A2: More visual comparison results can be found in the **global response pdf**, from which we can find that our ConPreDiff better expresses the local contexts and consistent semantics of text prompts in the generated images compared to other methods. **Q3: The authors need to discuss the additional training cost. Besides, they also need to provide the additional parameters they use for the context prediction.** A3: We report the approximate time cost (seconds) for these models to train a single step on the MS-COCO dataset in the table below. The additional context prediction head (two linear-BN-ReLU modules) added to the models accounts for **only a small portion of the parameters (around 0.13M)**, and the neighborhood distribution decoding of our method also does not incur significant additional training costs. 
Notably, our "context prediction" head can be removed in the inference stage without introducing extra testing costs. |Training cost (secs/step) |LDM | DALL·E 2| Improved VQ-Diffusion| Imagen| | :-----| :----: | :----: |:----: |:----: | |Original Model | 11.9 | 74.7 | 83.9 |198.3 | | + Context Prediction | 13.1 | 77.5 |87.6 |206.3 | **Q4: (1)** Why do you choose P_NS in this form? Is there any insight that motivates you to do this? **(2)** L115: What is $h_{t-1}$? I suppose it should be $x_{t-1}$. **(3)** L119: What is $h_{t-1}$? I think it should be $h_i$. **(4)** L138: What is $h_{u}^{(0)}$? Figure 4: What is NDM? A4: **(1)** In L138, $P_{N^s_i}$ is defined as a uniform distribution, which means we sample neighbors from the local context with equal probabilities. The main insight is that after the distribution decoding, neighbors are sampled without spatial order, and all ground-truth neighbors in a local area are important to the local semantics. Thus we treat sampled neighbors equally for reconstruction. **(2)** $h_{t-1}$ denotes one point in $x_{t-1}$, which is equal to $h_i$. We represent each point of $x_{t-1}$ in both time form ($h_{t-1}$) and spatial form ($h_{i}$) for better explanation. **(3)** "(0)" is the superscript of $h_u$, which denotes the center point of the neighborhood $N^s_{i}$ (i.e., $h_i$). **(4)** NDM is a typo; we previously named our model the Neighborhood Diffusion Model (NDM). **Q5: The authors did not discuss their limitations and societal impact in the paper.** A5: We will add these in the final version. Limitations: While our ConPreDiff boosts the performance of both discrete and continuous diffusion models without introducing additional parameters in model inference, our models still have more trainable parameters than other types of generative models, e.g. GANs. Furthermore, we note the long sampling times of both $\text{ConPreDiff}\_{dis}$ and $\text{ConPreDiff}\_{con}$ compared to single-step generative approaches like GANs or VAEs. 
However, this drawback is inherited from the underlying model class and is not a property of our context prediction approach. Neighborhood context decoding is fast and incurs negligible computational overhead in the training stage. For future work, we will try to find higher-order and more intrinsic information to preserve for improving existing point-wise denoising diffusion models. Broader Impact: Recent generative image models enable creative applications and autonomous media creation, but can also be viewed as a dual-use technology with negative implications. In this paper, we use human face datasets only for evaluating the image inpainting performance of our method, and our method is not intended to create content that is used to mislead or deceive. However, like other related image generation techniques, it could still potentially be misused for impersonating humans. A notorious example is the so-called "deep fakes" that have been used, for example, to create pornographic "undressing" applications. We condemn any attempt to create misleading or harmful content about a real person. Furthermore, the immediate availability of mass-produced high-quality images can be used to spread misinformation and spam, which in turn can be used for targeted manipulation on social media. Datasets are crucial for deep learning as they are the main input of information. The large-scale data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature have been critiqued and contested along various ethical dimensions. Furthermore, one should consider the ability to curate the database to exclude (or explicitly contain) potentially harmful source images. 
When creating a public API, that approach could offer a cheaper way to provide a safe model than retraining a model on a filtered subset of the training data or doing difficult prompt engineering. Conversely, including only harmful content is an easy way to build a toxic model. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer DpwQ Comment: Thank the authors for their response. My major concerns are well addressed, but I am still confused by some notations in the paper, such as $h_{t-1}$ and $\delta_{h_u^{(0)}}$, as I mentioned in the review. * How can $h_{t-1}$ represent a point in $x_{t-1}$? By the definition, it is the $(t-1)$-th feature point of the feature map. * Is $\delta$ the Dirac delta function? Why do you need to denote the center point of $N_i^s$ (i.e., $h_i$) as $h_{u}^{(0)}$? I suggest the authors carefully proofread their manuscript, especially the notations. I will raise my score by 1. --- Reply to Comment 1.1.1: Title: Response to Reviewer DpwQ Comment: We sincerely thank Reviewer DpwQ for raising the score. For $h_i (h_{t-1})$, the subscript is used to denote spatial and time information, respectively, because we want to illustrate the previous point-based diffusion process from different perspectives. For better illustration, we will use $x_{t-1}^i$ to denote the $i$-th feature point at time step $t-1$. $\delta$ denotes the Dirac delta function, and the notation for the center point can optionally be removed. Following your suggestion, we will carefully proofread our manuscript. Thanks for your kind response.
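As a back-of-envelope check of the ~0.13M parameter figure quoted in A3 above for the two Linear-BN-ReLU head modules: the hidden width of 256 below is our hypothetical assumption (the rebuttal does not state the dimensions), chosen because it reproduces the quoted order of magnitude.

```python
# Rough parameter count for a context-prediction head made of two
# Linear-BN-ReLU modules. Width 256 is a hypothetical choice, not a
# figure taken from the paper.
def linear_bn_relu_params(d_in: int, d_out: int) -> int:
    linear = d_in * d_out + d_out  # weight matrix + bias
    bn = 2 * d_out                 # BatchNorm learnable gamma + beta
    return linear + bn             # ReLU has no parameters

width = 256
total = 2 * linear_bn_relu_params(width, width)
print(f"{total:,} parameters (~{total / 1e6:.2f}M)")  # 132,608 parameters (~0.13M)
```

Under this assumption the count lands at roughly 0.13M, consistent with the figure stated in the rebuttal.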
Summary: This paper proposes an idea of context prediction to boost diffusion-based image generation. The core idea is that in each step of diffusion, after the denoised point is generated, neighborhood context prediction is performed. In particular, to maintain the spatial orders of the neighborhood, a permutation-invariant loss is used for optimization by replacing the context prediction with neighborhood distribution prediction. Performance improvements against standard diffusion models are presented in experiments. Strengths: 1. The idea is very interesting and sound. From an image denoising point of view, neighborhood info is commonly used, so it's a natural extension of diffusion-denoising models. 2. The performance improvements shown in experiments are promising. Weaknesses: The proposed approach probably takes longer to train. Can you discuss it from that perspective? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. By introducing context, is any sign of "blurriness" introduced? 2. What are the typical cases that do worse compared to standard diffusion? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Similar to text2image papers. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer qzoS for the positive review and valuable feedback. We are glad that the reviewer found that the idea is very interesting and sound, and the performance improvement shown in experiments is promising. Please see below for our responses to your comments.* **Q1: The proposed approach probably takes longer to train. Can you discuss it from that perspective?** A1: Thanks for your interesting question; we discuss this problem in two training scenarios: 1. Training ConPreDiff from scratch. We introduce the "context prediction" term as in Eq. 1. The proposed "context prediction" term plays an additional role in maximizing the ELBO and will accelerate the convergence (i.e., fewer training steps) of diffusion models. We provide the derivation here: At time t: Loss of DDPM (after reparameterization and scaling): $||x_0-\hat{x}_0(x_t,t)||_2^2$ Loss of ConPreDiff: $\sum\_{i = 1}^{x*y} [\mathcal{M}\_p(x\_0^{i} ,\hat{x}\_0^{i})+ \mathcal{M}\_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})]$ We let $\mathcal{M}_p,\mathcal{M}_n$ be the square loss, $\mathcal{M}_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})=\sum\_{j\in \mathcal{N}\_i}(x\_0^{i,j}-\hat{x}\_0^{i,j})^2$, where $x_0^{i,j}$ is the j-th neighbor in the context of $x_0^i$ and $\hat{x}_0^{i,j}$ is the prediction of $x_0^{i,j}$ from a denoising neural network. Following the notation in the main paper, we have $\hat{x}_0^{i,j} = \psi_n(\psi_p(x_t,t)(i))(j)$, where $\psi_p(x_t,t)(i)$ is the prediction of $x_0^i$ and $\psi_n$ is the neighborhood decoder. Compactly, we can write the denoising network as: $$\Psi(x_t,t)(i,j) =\left\\{ \begin{array}{l}\psi_n(\psi_p(x_t,t)(i))(j), & j \in \mathcal{N}\_i, \\\\ \psi_p(x_t,t)(i), & j=i \end{array} \right.$$ We can show that the DDPM loss is upper bounded by the ConPreDiff loss, by reparameterizing $\hat{x}_0(x_t,t)$. 
Specifically, for each unit i in the feature map, we predict the unit i and its neighbors using equation (1), and then we use the mean of the predicted values in the neighborhood as the final prediction: $$\hat{x}\_0(x_t,t)(i) = 1/(|\mathcal{N}\_i|+1)*\sum\_{j \in \mathcal{N}\_{i}\ \cup\ \\{i\\}}\Psi(x\_t,t)(i,j)$$ Now we can show the connection between the DDPM loss with the mean prediction and the ConPreDiff loss: $$\begin{array}{rl}||x_0-\hat{x}\_0(x_t,t)||_2^2 & = \sum_i (x_0^i-\hat{x}\_0(x_t,t)(i))^2,\\\\ &=\sum_i (x_0^i-\sum\_{j \in \mathcal{N}_i\cup\{i\}}\Psi(x_t,t)(i,j)/(|\mathcal{N}_i|+1))^2,\\\\ &=\sum_i(\sum\_{j \in \mathcal{N}_i\cup\{i\}}(\Psi(x_t,t)(i,j)-x_0^i))^2/(|\mathcal{N}_i|+1)^2, \\\\ (Cauchy\ inequality) & \leq \sum_i \sum\_{j \in \mathcal{N}\_i\cup\{i\}} (\Psi(x_t,t)(i,j)-x_0^i)^2/(|\mathcal{N}_i|+1), \\\\ &=1/(|\mathcal{N}\_i|+1)\sum_i [(x_0^i- \psi_p(x_t,t)(i))^2+\sum\_{j \in \mathcal{N}\_i} (x_0^{i,j}-\hat{x}\_0^{i,j})^2] \end{array}$$ As a result, the ConPreDiff loss is an upper bound of the negative log likelihood. Thus both terms are cooperative for convergence, and adding the "context prediction" term would accelerate convergence, i.e., require fewer training steps. 2. Training ConPreDiff from pre-trained diffusion models Initializing ConPreDiff with any pre-trained diffusion model will need additional fine-tuning time for adjusting the base denoising network and training our context prediction head. **Q2: By introducing context, is any sign of "blurriness" introduced?** A2: There is no sign of "blurriness" introduced by context, which can be partially explained by the above derivations. We also put more qualitative comparison results **in the global response pdf**. From the results, one can see that our ConPreDiff generates images that are more semantically consistent with text prompts than other methods, and preserves local context well without any sign of "blurriness".
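To complement the Cauchy-inequality step in A1, here is a tiny standalone Python check; it is ours, not from the paper, and the random values below are toy stand-ins for the predictions $\Psi(x_t,t)(i,j)$. It verifies numerically that the squared error of the averaged prediction never exceeds the average of the individual squared errors, which is exactly the inequality used above for each unit i.

```python
import random

random.seed(0)

def mean_prediction_gap(target, preds):
    """Squared error of the averaged prediction: (target - mean(preds))^2."""
    m = sum(preds) / len(preds)
    return (target - m) ** 2

def avg_individual_gap(target, preds):
    """Average of the per-prediction squared errors: mean((pred_j - target)^2)."""
    return sum((p - target) ** 2 for p in preds) / len(preds)

# Toy check over many random "units": the Cauchy inequality step says
# mean_prediction_gap <= avg_individual_gap for every unit i.
for _ in range(1000):
    x0_i = random.uniform(-1, 1)                             # ground-truth unit value
    preds = [x0_i + random.gauss(0, 0.3) for _ in range(9)]  # unit + 8 neighbors
    assert mean_prediction_gap(x0_i, preds) <= avg_individual_gap(x0_i, preds) + 1e-12
print("Cauchy inequality bound holds on all toy samples")
```

This is the scalar version of the bound; in the rebuttal it is applied per unit i with $|\mathcal{N}_i|+1$ predictions.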
**Q3: What are the typical cases that do worse compared to standard diffusion?** A3: Currently, we have not found obvious or typical cases that do worse than standard diffusion. However, we will continue to improve our model, explore more potential applications, and address possible limitations in future work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. The response addressed my concerns. I suggest accepting this paper.
Rebuttal 1: Rebuttal: ## Global Response We sincerely thank all the reviewers for the thorough reviews and valuable feedback. We are glad to hear that the idea is interesting and well-motivated (all reviewers), the paper is well-written and easy to follow (Reviewer m5Qc, ydZz, and 2mXr), the proposed method is general and can be easily applied to various diffusion models (Reviewer DpwQ, ydZz and 2mXr), and the performance improvements shown in experiments are promising (all reviewers). We here summarize and highlight our responses to the reviewers: * We make more visual comparisons to previous methods in the attached pdf to more directly demonstrate the performance improvement (Reviewer m5Qc, DpwQ and ydZz) and better semantic expression (Reviewer 2mXr) of our proposed ConPreDiff. * We provide an additional theoretical derivation from the perspective of maximizing the ELBO/likelihood (Reviewer m5Qc) to explain the cooperative relationship between our "context prediction" term and the existing point-based denoising diffusion term (DDPMs), which is beneficial for model optimization and convergence (Reviewer qzoS and 2mXr). * We also add some experiments to demonstrate the efficiency and soundness of the proposed "context prediction" (Reviewer m5Qc, DpwQ, and ydZz), and add more discussions about model limitations and societal impact (Reviewer DpwQ and ydZz, ethics reviews). We reply to each reviewer's concerns in detail below their reviews. Please kindly check them out. Thank you, and please feel free to ask any further questions. Pdf: /pdf/5cf8e9bdbfb074c624e18a8993d50a64b6509128.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper is proposing context-aware Diffusion Models. They make the models learn the context information by setting up auxiliary networks to estimate the neighbor distributions from the estimated denoised sample from Diffusion Models. The benefit of this approach is that the additional cost from the auxiliary networks is not incurred during sampling. Both quantitative and qualitative experiments are reported. Strengths: - Motivation is agreeable. - Good writing. - Reasonable method for the motivation. - Experiments are done well. Weaknesses: Specifics of the weaknesses of this paper are written below as questions and limitations, but I believe most of them can be resolved during the rebuttal. I will increase my rating if my concerns can be resolved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The meaning of "FNN" is not defined. * What is the meaning of “(0)” of the delta function in line 138? * Why not let $FNN_{\mu}$ and $FNN_{\sigma}$ take the target index as an additional input and skip Eq. 7? Once the target index is specified, the matching algorithm may not be needed anymore. * What is the relationship between $q$ in Eq. 7 and stride $s$ and $K$? * It would be better if the performance of directly estimating the neighbors is reported as well, since it would be a more accurate and simple setup to implement the motivation of this paper (even though it has the memory inefficiency, as mentioned in L132). * Can it be described more specifically why KL or JSD cannot be applied (L155)? It is not straightforward to me why KL cannot be applied to this task. * Are both terms in Eq. 3 used together for finetuning? * What happens if Diffusion Models are trained from scratch with the proposed objectives? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The most important part of the paper would be the “context prediction” term in Eq. 3. Its motivation is understandable, but it is not interpreted in terms of optimizing the variational bound of the negative log likelihood. Is it just an “additional” term, or can it be interpreted as a term playing a certain role in maximizing the ELBO or likelihood? - This paper is proposing to 1. estimate the neighborhood distribution (instead of directly estimating the neighborhood) and 2. minimize the Wasserstein distance as a core objective. Though the authors made an attempt to justify the design choice, I believe it could be compared as a sort of ablation study, which might strengthen the proposed method. --- I found Fig. 6, and the first concern is resolved. - I believe training time needs to be compared together in Fig. 7. - Although performance improvement is shown by quantitative experiments, it is not straightforward how the "context prediction" makes model performance better (qualitatively). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *We thank Reviewer m5Qc for the positive review and valuable feedback. We are glad that the reviewer found that the motivation is agreeable, the writing is good, the method is reasonable for the motivation, and the experiments are done well. Please see below for our responses to your comments.* **Q1: Interpret the “context prediction” term from the perspective of maximizing the ELBO or likelihood.** A1: Connection between our context prediction loss in Eq.3 and the ELBO: At time t: Loss of DDPM (after reparameterization and scaling): $||x_0-\hat{x}_0(x_t,t)||_2^2$ Loss of ConPreDiff: $\sum\_{i = 1}^{x*y} [\mathcal{M}\_p(x\_0^{i} ,\hat{x}\_0^{i})+ \mathcal{M}\_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})]$ We let $\mathcal{M}_p,\mathcal{M}_n$ be the square loss, $\mathcal{M}_n(\mathcal{H}\_{\mathcal{N}\_i},\hat{\mathcal{H}}\_{\mathcal{N}\_i})=\sum\_{j\in \mathcal{N}\_i}(x\_0^{i,j}-\hat{x}\_0^{i,j})^2$, where $x_0^{i,j}$ is the j-th neighbor in the context of $x_0^i$ and $\hat{x}_0^{i,j}$ is the prediction of $x_0^{i,j}$ from a denoising neural network. Following the notation in the main paper, we have $\hat{x}_0^{i,j} = \psi_n(\psi_p(x_t,t)(i))(j)$, where $\psi_p(x_t,t)(i)$ is the prediction of $x_0^i$ and $\psi_n$ is the neighborhood decoder. Compactly, we can write the denoising network (equation (1)) as: $$\Psi(x_t,t)(i,j) =\left\\{ \begin{array}{l}\psi_n(\psi_p(x_t,t)(i))(j), & j \in \mathcal{N}\_i, \\\\ \psi_p(x_t,t)(i), & j=i \end{array} \right.$$ We can show that the DDPM loss is upper bounded by the ConPreDiff loss, by reparameterizing $\hat{x}_0(x_t,t)$.
Specifically, for each unit i in the feature map, we predict the unit i and its neighbors using equation (1), and then we use the mean of the predicted values in the neighborhood as the final prediction: $$\hat{x}\_0(x_t,t)(i) = 1/(|\mathcal{N}\_i|+1)*\sum\_{j \in \mathcal{N}\_{i}\ \cup\ \\{i\\}}\Psi(x\_t,t)(i,j)$$ Now we can show the connection between the DDPM loss with the mean prediction and the ConPreDiff loss: $$\begin{array}{rl}||x_0-\hat{x}\_0(x_t,t)||_2^2 & = \sum_i (x_0^i-\hat{x}\_0(x_t,t)(i))^2,\\\\ &=\sum_i (x_0^i-\sum\_{j \in \mathcal{N}_i\cup\{i\}}\Psi(x_t,t)(i,j)/(|\mathcal{N}_i|+1))^2,\\\\ &=\sum_i(\sum\_{j \in \mathcal{N}_i\cup\{i\}}(\Psi(x_t,t)(i,j)-x_0^i))^2/(|\mathcal{N}_i|+1)^2, \\\\ (Cauchy\ inequality) & \leq \sum_i \sum\_{j \in \mathcal{N}\_i\cup\{i\}} (\Psi(x_t,t)(i,j)-x_0^i)^2/(|\mathcal{N}_i|+1), \\\\ &=1/(|\mathcal{N}\_i|+1)\sum_i [(x_0^i- \psi_p(x_t,t)(i))^2+\sum\_{j \in \mathcal{N}\_i} (x_0^{i,j}-\hat{x}\_0^{i,j})^2] \end{array}$$ In the last equality, we assume that the feature is padded so that each unit i has the same number of neighbors $|\mathcal{N}|$. As a result, the ConPreDiff loss is an upper bound of the negative log likelihood. **Q2: Training time comparison.** A2: We report the approximate time cost (seconds) of training these models for a single step on the MS-COCO dataset in the table below. The additional context prediction head added to the models accounts for only a small portion of the parameters (0.13M), and the neighborhood distribution decoding of our method also does not incur significant additional training costs. 
|Training cost (secs/step) |LDM | DALL·E 2| Improved VQ-Diffusion| Imagen| | :-----| :----: | :----: |:----: |:----: | |Original Model | 11.9 | 74.7 | 83.9 |198.3 | | + Context Prediction | 13.1 | 77.5 |87.6 |206.3 | **Q3: Qualitative comparison.** A3: More visualization results can be found in the **global response pdf**; ConPreDiff better expresses local contexts and consistent semantics in generated images compared to other methods. **Q4: The meanings of "FNN" and “(0)” in line 138.** A4: We first mention FNN in L145 as feedforward neural networks, which contain two linear-BN-ReLU modules in our experiments. "(0)" is the superscript of $h_u$, which denotes the center point of the neighborhoods $N^s_{i}$ (i.e., $h_i$). **Q5: Why not let FNN take the target index as an additional input and skip Eq. 7?** A5: Our distribution decoding loses the original spatial order of neighborhoods, and thus we use a **permutation-invariant loss (Wasserstein distance)** for optimization. However, the Wasserstein distance between the decoded neighborhood distribution and the ground truth does not have a closed form. Thus we design **Eq.7 as an empirical surrogate loss for the Wasserstein distance**. Taking the target index as an additional input may not approximate the Wasserstein distance well. **Q6: What is the relationship between q in Eq.7 and stride s and K?** A6: 1. $K={(2s+1)}^2-1$ 2. $q<K$ **Q7: It is better to report the performance of directly estimating the neighbors as well.** A7: As shown in Fig.6 of our paper, direct estimation of neighbors slightly improves the generation results but incurs significantly more training cost with larger strides. Our distribution decoding achieves a better trade-off between FID score and training cost. **Q8: Why can KL or JSD not be applied (L155)?** A8: The Wasserstein distance can effectively measure structural similarity and impose a structural constraint between the decoded distribution and the ground truth, via an optimal-transport loss. 
Moreover, the Wasserstein distance can measure the similarity **between continuous and discrete distributions**, but KL or JSD cannot characterize the structural similarity between such distributions well. **Q9: Are both terms in Eq. 3 used together for finetuning? What happens if Diffusion Models are trained from scratch with the proposed objectives?** A9: Both terms are used together. The convergence of our ConPreDiff would be quicker than that of diffusion models that only use a point-based reconstruction objective, because both terms play a cooperative role in maximizing the ELBO, as demonstrated in the above responses. --- Rebuttal Comment 1.1: Comment: I keep my initial rating because my major concerns are resolved well. Thank you for the rebuttal.
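To illustrate the permutation invariance invoked in A5 and A8 above, here is a minimal pure-Python sketch; it is ours, not the paper's actual surrogate of Eq. 7. For equal-size 1-D empirical distributions, the squared 2-Wasserstein distance reduces to matching sorted samples, so shuffling a neighborhood leaves it unchanged, while a position-wise squared loss does not:

```python
def w2_squared_1d(a, b):
    """Squared 2-Wasserstein distance between two equal-size 1-D samples:
    sort both and match order statistics (the 1-D optimal transport plan)."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(sorted(a), sorted(b))) / len(a)

def pointwise_sq(a, b):
    """Position-wise squared loss: sensitive to the ordering of neighbors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

truth    = [0.1, 0.5, 0.9, 0.3]   # toy ground-truth neighborhood values
shuffled = [0.9, 0.3, 0.1, 0.5]   # same values, different spatial order

print(w2_squared_1d(truth, shuffled))           # 0.0: order does not matter
print(round(pointwise_sq(truth, shuffled), 2))  # 0.34: order matters
```

The actual neighborhoods here are sets of feature vectors rather than scalars, so this 1-D closed form is only a toy analogue of the matching performed by Eq. 7.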
null
null
null
null
null
null
Critical Initialization of Wide and Deep Neural Networks using Partial Jacobians: General Theory and Applications
Accept (spotlight)
Summary: This paper studies criticality of deep neural networks at initialization. The authors propose a new practical way to diagnose criticality by introducing the partial Jacobian of the network and analyzing the averaged partial Jacobian norm (APJN) and its recurrence relation at large depth. The authors then apply their method to analyze criticality in fully connected networks with LayerNorm and/or residual connections, providing theoretical analysis for infinitely wide networks, a numerical test to select the optimal initialization, and conditions on the network architecture that allow criticality for any initialization. Strengths: 1. The paper presents theoretical analysis of the APJN in the infinite width limit for various network architectures, and thorough validation by numerics. 2. The paper extensively explores the application of the theoretical analysis and numerical test on modern architectures (ResNet and MLPmixer), providing practical insights for initializing neural networks for improved trainability. Weaknesses: 1. As with most other works studying critical initialization, the work focuses on networks at initialization with Gaussian random weights, and there is no learning in the network. While the authors briefly mentioned NTK (line 150), I would like to see a discussion on how their approach may be extended for analysis beyond networks at initialization and perhaps shed light on the learning dynamics of the network. 2. In section 1.2, the authors discussed several related works and how part of their results were previously obtained in a different form in these works. I would recommend that the authors further stress the novelty of their approach/analysis in this section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How is the APJN defined in this paper related to the order parameters used in previous works [Poole et al, Schoenholz et al.] 
for criticality (specifically the $c^*$ and $\chi_1$ defined in Eq.4-5 in [Schoenholz et al.]). It would be nice if the authors could add a paragraph on how the APJN is related to these order parameters, and how the empirical test they introduced would be superior to evaluating scalings of the order parameters directly (providing a simpler numerical test? Unbounded activation functions?). 2. Line 146: I am a bit confused about the respective roles of $\chi_K$ and $\chi_J$ in the trainability of the network and how they are related to each other and to the conditions on $\chi_1$ and $c^*$ in [Poole et al.]; can you clarify this in the paragraph? 3. Section 2: There are two limits here, the infinite width and the infinite depth. It seems that it is important for the scope of this paper to take the infinite width limit (successively) first, is that correct? 4. Figure 3: This figure perhaps needs a better caption. Are the white dashed lines given by the infinite width analysis? Why does there seem to be a larger discrepancy in networks with LayerNorm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors mentioned in various parts of the paper but did not specifically summarize the limitations, I would recommend adding a paragraph in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. ## Training Dynamics and Relation to NTK The training regime that we target enjoys feature learning and large learning rates, and hence, is far from the NTK regime. While it is possible to connect APJN with NTK and analyze linear dynamics, that is not the purpose of our work. (We can readily add the quantitative relation between APJN and NTK in the Appendix.) Moreover, dynamics beyond the NTK regime for non-linear networks with $L>2$ is a famously unsolved problem. ## Novelty The reviewer has correctly identified the key advantages of our work. We elaborate on them here: - All previous diagnostics of criticality require either closed-form solutions or an integral formula followed by numerical estimation [1,2]. This limits their usability for general/unseen architectures. Empirical APJN circumvents these limitations, and is useful as a numerical test for *general* real-world architectures. \ In addition to the architectures presented in the text, one can analyze inhomogeneous networks by dividing them into "blocks" of layers and considering block-to-block Jacobians. - $\chi_{\mathcal J}$ can be numerically calculated with a single layer-to-layer gradient, which is readily obtained using Autodiff. Consequently, our method is simple and computationally cheap. - The characterization of the everywhere-critical regime using normalization layers and residual connections. This finding, to our knowledge, is new. In Section 1.2 of the final version, we will emphasize our contributions and the novelty of our approach. ## The role of $\chi_{\mathcal J}, \chi_{\mathcal K}$ and relation to $c^*, \chi_1$: $\chi_{\mathcal J}^l$ is defined as the layer-to-layer APJN $\mathcal J^{l,l+1}$. For large enough $l$ it converges to the fixed point $\chi_{\mathcal J}^*$. The constraint $\chi_{\mathcal J} = 1$ regulates the backward pass (gradient propagation) in the network. 
For Fully connected networks (FCNs), $\chi_\mathcal J^*$ can be viewed as a generalization of $\chi_1$ defined in the works [1,2]. The key advantages of our method over theirs are: - Unlike $\chi_1$, $\chi_{\mathcal J}^*$ can be applied to both bounded and unbounded activation functions. (Note that $c^*$ in the aforementioned works fails to capture the critical behavior for unbounded activations such as ReLU -- $c^*=1$ in both phases.) We point this out in line 148. - As mentioned in the "Novelty" section above, empirical $\chi_{\mathcal J}$ is computationally cheap and has general applicability compared to the aforementioned works. While $\chi_{\mathcal J}=1$ regulates the backward pass of the network (i.e. gradient updates), $\chi_{\mathcal K}=1$ regulates the forward pass of the networks (i.e. preactivation norms). In practice, we find that the condition on $\chi_{\mathcal J}$ is more important for trainability. We will state these points more clearly in the final version. ## Order of limits in Section 2 Yes, for the scope of our work, it is important to take the width to infinity first. More precisely, we take the successive layer widths to infinity (sequential limit in Theorem 1.3). In practice, this translates to $L/N \sim o(1)$. ## Figure 3 Caption and Clarifications We had to truncate the image captions due to space constraints. In the final version, we will add the following caption for Figure 3: "Figure 3: Trainability (Training accuracy) of deep MLP on FashionMNIST $(N_l=500, L=50)$. The dotted white line denotes the (analytical) critical lines. Note that the combination of LayerNorm and residual connections significantly improve trainability, with the $\mu=1$ case being everywhere-trainable." Yes, the white dashed lines are critical lines from theory. We are unsure of what the reviewer means by "larger discrepancy in networks with LayerNorm". We answer this question with two possible interpretations. 
We urge the reviewer to clarify their question so we can better answer it. - Networks with LayerNorm seem trainable even far from the critical line (dotted white line): LayerNorm, especially in conjunction with residual connections drastically improves the correlation length. For real, finite-depth networks, correlation length sets the scale for trainable depth. This results in finite-depth networks training well in a finite region around criticality. - Critical lines have seemingly different slopes in Figures 2 and 3: This is simply the result of the span of the Y axes of the plots. They are identical otherwise. [1] B. Poole, et al., Exponential expressivity in deep neural networks through transient chaos, 2016. [2] S. S. Schoenholz, Deep Information Propagation, 2016.
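To make the cheapness of the $\chi_{\mathcal J}$ test discussed in the rebuttal concrete, here is an illustrative Monte Carlo sketch of ours, not the authors' code: pure Python stands in for the Autodiff computation, using the standard infinite-width relation $\chi_{\mathcal J} = \sigma_w^2\,\mathbb{E}[\phi'(z)^2]$ for a fully connected layer. For ReLU, $\phi'(z)^2$ is 1 on half the Gaussian mass, so $\chi_{\mathcal J} = \sigma_w^2/2$ and criticality ($\chi_{\mathcal J}=1$) sits at the He initialization $\sigma_w^2 = 2$:

```python
import math
import random

random.seed(1)

def chi_jacobian_relu(sigma_w2, kernel=1.0, n_samples=200_000):
    """Monte Carlo estimate of chi_J = sigma_w^2 * E[phi'(z)^2] for ReLU,
    with preactivations z ~ N(0, kernel). phi'(z)^2 is 1 for z > 0, else 0."""
    std = math.sqrt(kernel)
    hits = sum(1 for _ in range(n_samples) if random.gauss(0.0, std) > 0.0)
    return sigma_w2 * hits / n_samples

# He initialization (sigma_w^2 = 2) should sit at criticality: chi_J ~ 1.
print(round(chi_jacobian_relu(2.0), 2))
```

Note that for ReLU the estimate is independent of the kernel value, which is why a single layer-to-layer gradient suffices empirically once preactivations are approximately Gaussian.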
Summary: The paper addresses the theoretical treatment of deep neural networks and introduces a novel practical approach to identify criticality within these networks. The authors work in the setting where the number of parameters per layer approaches infinity, enabling the formulation of quantitatively predictive descriptions (establishing criteria for hyperparameter selection). These criteria are based on the notion of criticality. To identify criticality, the paper introduces partial Jacobians, which represent derivatives of preactivations in layer $l$ with respect to preactivations in layer $l_0$ (where $l_0 \leq l$). Recurrence relations for the norms of these partial Jacobians are derived and utilized to analyze criticality in deep fully connected neural networks featuring LayerNorm and/or residual connections. The authors devise a straightforward and cost-effective numerical test to determine the optimal initialization for various types of deep neural networks, including fully connected, convolutional, and normalization layers. They present quantitative evidence demonstrating that arranging LayerNorm (applied to preactivations) and residual connections appropriately leads to an architecture that exhibits criticality regardless of the initialization. Finally, the paper applies these methods to investigate the ResNet and MLP-Mixer architectures, revealing the presence of an everywhere-critical regime within these modern models. Strengths: This research represents a significant advancement in the field. A notable contribution of the paper is the introduction of partial Jacobians and their averaged norms as powerful tools for analyzing gradient propagation in deep neural networks at initialization. The paper particularly investigates the implications of LayerNorm and residual connections, shedding light on their impact on trainability. 
It is encouraging that the theoretical predictions derived from this framework match previously empirically observed or theoretically proven findings. Additionally, the paper strengthens its contributions by testing the findings on more realistic datasets, thereby enhancing the robustness and applicability of the research. The paper is highly comprehensive and lucid, greatly facilitating understanding. It effectively summarizes the contributions, allowing readers to grasp the main points without constant reference to the appendix. Furthermore, the authors have shared the relevant code with their submission, ensuring transparency and enabling the replication of results. The inclusion of detailed step-by-step derivations significantly aids comprehension and is highly appreciated by readers. Weaknesses: These suggestions are intended to improve the overall clarity and accessibility of the research. To enhance the organization of the paper, it would be beneficial to include dedicated sections for limitations, future work and contributions. Additionally, the current title of the section labeled "results" may be misleading and should be reconsidered. Furthermore, FashionMNIST is not mentioned in the main text. The limitations outlined below highlight some of the weaknesses of the paper, despite the fact that they represent the current best efforts in the field. Although it may seem like a minor detail, I believe it holds significant importance. For the sake of accessibility, completeness, and readability, I would suggest including relevant lemmas or theorems used from other sources in the appendix to provide a more comprehensive understanding of the research. Furthermore, it would be helpful if you could slightly extend the captions in the figures for the camera-ready version for the same reason. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Dear authors, I have a few questions regarding your work that would greatly aid my comprehension: * Can you provide a rationale for selecting the architectures used in your study? Are there any other potential architectures that could have been considered within the framework? * In addition to LayerNorm and residual connections, what other techniques used in the field could have been explored in your framework? Thank you in advance. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: To enhance the paper, it would be valuable to provide a clear outline of the limitations of the work. Some potential limitations to consider include: The paper focuses on testing a specific set of realistic architectures, which means there is room for exploring other architectures beyond those examined in the study. Although the obtained results are encouraging, extending the analysis to different architectures would provide a more comprehensive understanding of the findings. Additionally, the paper focuses on investigating the effects of LayerNorm and residual connections. While these factors are certainly important, exploring the impact of other components or techniques apart from LayerNorm and residual connections could broaden the scope of the research. Examining different elements in the network architecture could reveal additional insights into their contributions and interactions. Another important limitation of the paper is that it specifically focuses on the infinite width limit of deep neural networks. 
While analyzing neural networks in the infinite width limit can provide valuable theoretical insights, it may not fully capture the behavior and characteristics of networks with finite widths. To address this limitation, it would be valuable to extend the analysis to include investigations and experiments on networks with various finite widths. This would provide a more realistic and comprehensive understanding of the behavior, performance, and criticality of deep neural networks in practical settings. Including a note on the computational complexity required to conduct these experiments would be beneficial. Acknowledging the computational demands of the experiments would provide a better understanding of the resources needed for replication or further research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback and suggestions for improvement. ## Presentation and Clarity We welcome the suggestions to improve the presentation of our paper: - We will include a discussion on limitations and future work in Section 7 (Conclusion). - We will change the Section title to "Main Result", as it states "everywhere-criticality", a key contribution of our work. - We mention FashionMNIST in the caption for Figure 3. - We will extend the figure captions (especially Figure 3) in the final version of the paper. (See Global Response for further discussion.) ## Answers to Questions: 1. The applicability of the *numerical* Averaged Partial Jacobian Norm (APJN) is quite general. The choice of modern architectures like ResNet and MLP-Mixer was made to demonstrate its utility in SOTA models. It is possible to extend this method to models with Attention layers (e.g. Transformers). We find that APJN correctly predicts the everywhere-criticality of the pre-LN Transformer. For further details, we refer the reviewer to the General Response and the accompanying one-page pdf. 2. As we mention in the paper, the analysis also extends to other normalization techniques such as BatchNorm and GroupNorm. Dropout can also be included. In general, one can divide the network into "blocks" of layers and study APJN wrt outputs of these blocks. Since these blocks can contain arbitrary layers/techniques within them, any/all models with this "feedforward" structure fall within the domain of applicability of APJN. ## Infinite Width Limit We would like to emphasize that although our theoretical analyses are performed in the infinite width limit, all training runs (including MLP, ResNet and MLP-Mixer) are performed on real, finite-width networks. For example, all MLP experiments were performed with N=500, L=50. Formally, the notion of criticality remains crisp for networks with $L/N \sim o(1)$. 
$L/N$ corrections to the infinite width analysis have been calculated in other works [1][2]. Finite $L/N$ creates fluctuations in partial Jacobian-norm around its mean value. Nevertheless, for practical purposes, the notion of criticality remains largely unchanged. This can be readily seen by the agreement of our finite-width experiments (phase diagrams) and infinite-width predictions (dashed lines) in Figure 2. We direct the reviewer to Global Response for further remarks. ## Extension to other architectures and techniques - As mentioned above, empirical application of APJN is quite general. On the theoretical side, a comprehensive extension of our methods to Transformers would be very useful; especially due to the ubiquity of Transformers across various tasks/modalities. - We study normalization techniques like LayerNorm as well as residual connections because of their prevalence in modern architectures. We encourage further studies of our methods on domain-specific techniques. ## Computational Resources Appendix A outlines the computational resources utilized in performing each of our experiments. In general, the computation of empirical APJN takes $<1\\%$ of the resources required for training. We are happy to provide further details. [1] S. Yaida, Non-Gaussian processes and neural networks at finite widths, 2020. [2] B. Hanin, Random Fully Connected Neural Networks as Perturbatively Solvable Hierarchies, 2022. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for your careful explanation and detailed rebuttal. I increased my score as a reflection of improvement in clarity and presentation as well as the general understanding of the contribution. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and the careful consideration of our manuscript. We are glad to have addressed the reviewer's questions and concerns satisfactorily; and are grateful for the score revision. We are happy to answer any further questions.
Summary: The paper studies the effect of the expected value of the Jacobian norm of a particular layer with respect to a previous layer as the depth of a neural network (NN) increases. The study is done under the assumption of infinite-width NNs, with the goal of assessing sensitivity to initialisation hyperparameters (the standard deviation of the normal initialiser of the weights and biases of the network) depending on architecture and as depth increases. The importance of understanding this setting comes from the connection with the exploding and vanishing gradients that the network is likely to exhibit in training, unless the initialisation (in expectation) is in the critical region. The conditions for criticality across architectures are studied in this work. The settings studied include feed-forward networks, residual networks with and without layer-norm applied to the pre-activation, as well as group norm and, briefly, Batch Norm in conjunction with residual networks. Reasons for rating: While the contribution of the paper is worthwhile, the clarity could be greatly improved. At the moment, the paper reads more as a series of facts than as a clear logical deduction. I worry that this will limit the value it can bring to the ML community. Strengths: Strengths: * The paper provides a theoretical assessment of why, in the infinite-width limit, certain neural networks can achieve better trainability. In particular, the authors study the effect of residual connections and layer norm, both of which have become staples of deep learning. This type of study can help explain why certain architectures are less sensitive to initialisation compared to others. * Code reproducing the figures is provided in the supplementary material as notebooks (and a clear readme.txt detailing the results). * Proofs in the attached SM are generally readable and can be followed. Weaknesses: Weaknesses: * The clarity of the paper can be drastically improved.
* In the paper, the same recipe of proof is applied repeatedly in multiple settings. I think the paper would benefit from clarifying that recipe in the main text, and working through the logical steps one by one in one example. At the moment, there is no clear explanation of the logical flow of what is occurring. My understanding of that recipe is: In the setting studied (for that specific architecture), in the infinite width limit find a recurrence relationship between the NNGP kernels at layer l+1 and l when evaluated at the same datapoint. Use that recurrence relationship, together with the recurrence relationship for APJN, to find the conditions of criticality for initialisations. I urge the authors to clarify this recipe (in more detail) in the main manuscript, as well as work through an example. * There are many sentences for which it is unclear where they come from: whether from a proof in the Appendix, from a previous work, or because the authors consider them trivial. Such an example is Eq 4, which potentially follows from Thm 1.3, but the order of the two is unclear. Similarly, the proof of Thm 1.3 in the Appendix is not well delimited or clarified (given that it is the main result). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Suggestions for improvement: * Please see above in terms of weaknesses. * For Thm 2.4 (and similarly in the Appendix proofs), why not write the recurrence relationship as K^{(l+1)}(x, x) = \sigma_w^2 K^{(l)}(x, x) + \sigma_b^2 ? * There are many sentences in the paper that require additional citations, such as line 213. * Figure 1b shows a lot of noise as the number of layers increases; can the authors comment on that? * The authors should provide additional details on how the figures are obtained; at the moment these are scant. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I believe that the biggest limitation of the paper as is now is its presentation, which can be drastically improved. This applies both to the writing and the details around the figures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and detailed suggestions on the presentation. ## Clarity We believe that the logical flow of the current manuscript is similar to what the reviewer has proposed. We state the outline of our paper here -- we will add a version of this in the Introduction of the final version: In Section 2 we introduce the recurrence relations for the NNGP kernel and Jacobians of fully connected networks. Based on these quantities we define criticality. In Section 3 and Section 4, we add LayerNorm and residual connections, describing their effects on criticality. Then in Section 5, we combine LayerNorm and residual connections to show the emergence of everywhere-criticality. Finally, we apply our methods to real-world architectures like ResNet and MLP-Mixer in Section 6. ## Proof of Eq 4 We use Section 1.1 to summarize our theoretical results. Eq. 4 can be viewed as the definition of the correlation length $\xi$. Theorem 1.3 then utilizes this definition and states the effect of LayerNorm and residual connections on the correlation length. An additional explanation for Eq. 4 is provided in Section 2.1. In the final version, we will add a reference to Section 2.1 as a motivation for Eq 4. ## Proof of Thm 1.3 Theorem 1.3 directly follows from Eq. 77 (Appendix F), upon taking the infinite width and large depth limits (and taking the logarithm, as mentioned in the text below Eq 77). We will flesh out these steps more explicitly in the final version. ## Form of recursion equations: We offer responses to two possible interpretations of the reviewer's question: (i) If the reviewer is suggesting the specific form: The form that the reviewer mentioned, $K^{(l+1)}(x, x) = \sigma_w^2 K^{(l)}(x, x) + \sigma_b^2$, only holds for linear networks. For other cases $K^{(l+1)}(x, x)$ is a non-linear map of $K^{(l)}(x, x)$, which depends on the activation function.
(ii) If the reviewer means to ask why we keep the summation and the factors of $1 / N_l$ in the equations: It is possible to introduce some extra symbols to simplify the equations notationally. However, we believe that such additional notation would come at the price of clarity and lucidity. The current way of writing the formulae makes the underlying calculations apparent. This should make the equations accessible to a broader audience. ## Additional references: We thank the reviewer for pointing out places where more references are potentially warranted. We will scan the text and add the relevant references. ## Noise in Figure 1(b) The apparent fluctuations in Figure 1(b) come from two factors: (i) The y-scale of Figure 1(b) is more zoomed in compared to other plots, making the fluctuations appear strong. We chose this scale because the underlying curve of $\mathcal J^{0, l}$ is a constant -- with no increasing or decreasing trend, the fluctuations offer the only natural choice for the scale of the y-axis. (ii) The existence of the fluctuations is a result of the depth-to-width ratio ($L/N$); $L/N = 250/1000 = 0.25$ in this case. This results in strong fluctuations even after averaging. ## Details of figures The experimental details of all the figures, along with the computational resources for reproducing them, are fleshed out in Appendix A (due to space limitations). In the final version, we will expand the figure captions by adding further experimental details. --- Rebuttal Comment 1.1: Title: Rebuttal update Comment: I thank the authors for the update (and for sharing the empirical transformer results, which, while not requested by myself, are nonetheless interesting). I have decided to keep my score. Other reviewers have also highlighted the need for an increase in clarity of the paper, and I do not feel that the changes suggested by the authors above are meaningful enough to greatly increase clarity.
--- Reply to Comment 1.1.1: Title: Concerns about clarity Comment: We thank the reviewer for their response and careful consideration of our manuscript. To further address the reviewer's concerns about clarity, we will add a summary of the recipe for the derivation of the recurrence relations at the beginning of Section 2. To that end, we will include the following paragraph above Definition 2.1: ``Here we derive the infinite width recurrence relations for the APJN and the NNGP kernel. We use Lemma 2.2 to derive the NNGP kernel recursion, and leverage that to obtain the recursion for the APJN. Fixed point analyses of these relations allow us to define the critical line and critical point. We discuss the vanilla MLP with no LayerNorm and $\mu=0$ in this section. Results with LayerNorm and/or residual connections, as well as modern architectures, are presented in the following sections. (We refer the reader to Appendices C,E,F for proofs and detailed applications of the above recipe. Appendices H,I,J contain the results for various activation functions.)'' We will also modify Appendices C,E,F to make the flow of proofs more apparent. Due to space constraints, we reserve the detailed proofs and various example settings (LayerNorm, residual connections, etc.) for Appendices C,E,F, and example activation functions for Appendices H,I,J. We hope to have satisfactorily addressed the reviewer's concerns about the presentation. We welcome any further questions and suggestions.
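As a toy complement to the discussion of the recursion's form in this thread, iterating the linear-network recursion $K^{(l+1)}(x,x) = \sigma_w^2 K^{(l)}(x,x) + \sigma_b^2$ already exhibits the three phases around the critical point. This is an illustrative sketch under that linear-network assumption, not the paper's code:

```python
def nngp_profile(sigma_w2, sigma_b2=0.0, K0=1.0, depth=50):
    """Depth profile of the linear-network NNGP kernel K^(l)(x, x)."""
    K = [K0]
    for _ in range(depth):
        K.append(sigma_w2 * K[-1] + sigma_b2)
    return K

ordered = nngp_profile(0.5)   # kernel decays geometrically: vanishing signal
critical = nngp_profile(1.0)  # fixed point: kernel preserved at every depth
chaotic = nngp_profile(1.5)   # kernel explodes geometrically
```

For non-linear activations, the update `sigma_w2 * K[-1] + sigma_b2` would be replaced by a non-linear map of the kernel, as the response notes.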
Summary: The paper presents a theoretical framework for understanding the trainability of deep neural networks with LayerNorm and residual connections. The authors derive analytical expressions for the neural network Gaussian process (NNGP) kernel and the partial Jacobian norm (PJN) for a wide range of activation functions. They show that the combination of LayerNorm and residual connections leads to an everywhere-critical regime, where the network can be trained effectively irrespective of the initialization. The authors also provide insights into the role of the hyperparameters and the activation function in determining the trainability of the network. The paper's contributions include a theoretical understanding of the trainability of deep neural networks with LayerNorm and residual connections and insights into the design of effective architectures. Strengths: The main strengths of the paper are: 1. Introduces partial Jacobians and their averaged norms as tools to analyze the propagation of gradients through deep neural networks at initialization. 2. Presents a very cheap and simple empirical test for criticality using APJN evaluated close to the output. 3. Shows that criticality formulated in terms of partial Jacobians is equivalent to criticality studied previously in the literature. 4. Investigates homogeneous architectures that include fully-connected layers, normalization layers, and residual connections, and shows that the combination of LayerNorm and residual connections can drastically increase correlation length leading to improved trainability. 5. Considers examples of modern architectures, ResNet and MLP-Mixer, and shows that they are critical everywhere at µ = 1 (i.e., with residual connections) due to the interaction between LayerNorm and residual connections. 6. Empirically demonstrates that deep MLP-Mixer with µ = 1 trains well for various initializations. Weaknesses: Some potential limitations of the paper could include: 1. 
The analysis is limited to homogeneous architectures with fully-connected layers, normalization layers, and residual connections. The results may not generalize to other types of architectures or layers (e.g., attention layers). 2. The paper focuses on the infinite width limit, which may not be directly applicable to finite-width networks commonly used in practice. 3. The empirical test for criticality based on the averaged partial Jacobian norm may not be sufficient to fully capture the behavior of deep neural networks (in terms of accuracy for instance). 4. The paper does not provide a comprehensive comparison with other methods for improving the trainability of deep neural networks, such as weight initialization schemes or adaptive optimization algorithms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See the weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. ## Infinite width limit Formally, our theoretical results become crisp when $L/N \sim o(1)$ (width $N$, depth $L$). In practice, our conclusions apply whenever this ratio is small. For instance, all the phase diagrams in Figures 2, 4, and 5 are obtained on real, finite-width, deep networks (MLP: N=500, L=50; ResNet and MLP-Mixer: N=512, L=32). One can readily see the agreement between the finite-width phase diagrams and infinite-width predictions (dashed lines). Moreover, in practice, very deep architectures typically use residual connections. Residual connections make networks effectively shallow, leading to an effectively small ratio $L/N$. We direct the reviewer to the "Global Response" for more details. ## Inhomogeneous architectures The utility of the Averaged Partial Jacobian Norm (APJN) readily generalizes to inhomogeneous architectures. These architectures can contain differently initialized fully-connected layers, convolutional layers, normalization layers, residual connections, and even attention layers. In these cases, due to inhomogeneity, the fixed point analysis and scaling discussion no longer apply. However, it is possible to impose criticality with a stricter requirement: $\mathcal J^{l,l+1}=1$ for all layers. This requirement fixes the hyperparameters layer-by-layer, which can be implemented using our code. The additional computational cost required in this case is no more than that of other methods for critical initialization; we view it as a price one must pay no matter which method one uses. ## Attention We present empirical phase diagrams for the Vision Transformer (ViT) with various settings in the accompanying one-page PDF. Our empirical methods correctly measure the gradient scale, including the fact that pre-LN transformers are better behaved than post-LN transformers. On the theoretical side, the analysis is more involved.
Attention layers can be viewed as fully-connected layers with data-dependent weights. It is known that the commonly used scaled dot-product attention layer converges to a Gaussian process in the limit where the number of heads and the embedding dimension go to infinity [1]. For most real-world models $L / n_{heads}$ is $O(1)$, which makes the infinite width analysis unfavorable even with residual connections. Furthermore, the softmax function in attention prevents closed-form results. Despite the lack of a complete theoretical understanding, our tools can nevertheless analyze criticality for such networks (see the attached one-page PDF). ## Accuracy The objective of our methods is not to predict the best performance based on initialization. In general, it is hard to isolate the effect of initialization from factors like the dataset, optimization, regularization, etc. Our goal is to provide a universal framework for initializing a network that facilitates good training performance. Such a framework is invaluable, for instance, in Neural Architecture Search (NAS), where it is important to distinguish the effect of architecture from that of initialization. Our experiments show that APJN manages to achieve this in a diverse set of cases. ## Comparison We will add more related works and explain the differences and connections between known methods and ours. Here we briefly summarize the key points: - Popular initialization schemes, for example He initialization, are special cases of our method. These schemes apply to specific cases (e.g., He init is critical for ReLU networks). - References to prior methods that utilize the infinite width limit to fix initializations (e.g., $\sigma_w^2$) are included in lines 146-150. - Methods focusing on scaling, including Fixup [2], ReZero [3], and other works, can also be summarized as initializing with $\mathcal J^{l, l+1} = 1$. - Good initialization is essential even with adaptive methods and should be used in conjunction with them.
For Adam, at early training time, the gradient update is bounded by the learning rate $\eta$ [4][5]. At layer $l$, the difference between the feature updates of two adjacent input vectors $x$ and $x+\epsilon$ is upper bounded by $O(\eta \cdot \epsilon \cdot \mathcal J^{0, l})$ (up to some architecture/loss dependent constant). In this case, $\mathcal J^{0, l}$ regulates the relative updates for different features, preventing them from being too large or too small and facilitating better training. [1] J. Hron, et al., Infinite attention: NNGP and NTK for deep attention networks, 2020. [2] H. Zhang, Y. N. Dauphin, T. Ma, Fixup Initialization: Residual Learning Without Normalization, 2020. [3] T. Bachlechner, et al., ReZero is All You Need: Fast Convergence at Large Depth, 2020. [4] D. P. Kingma, J. Ba, Adam: A Method for Stochastic Optimization, 2017. [5] J. Ma, D. Yarats, On the Adequacy of Untuned Warmup for Adaptive Optimization, 2021.
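The layer-by-layer requirement $\mathcal J^{l,l+1}=1$ mentioned in the inhomogeneous-architectures response above can be sketched with a Monte Carlo estimate. Everything here is illustrative and assumed, not the authors' implementation: for a fully connected layer with $1/\sqrt{\text{fan-in}}$ weight scaling, the infinite-width single-layer factor is $\chi = \sigma_w^2\,\mathbb{E}_{z\sim\mathcal N(0,K)}[\phi'(z)^2]$, so one can solve $\chi=1$ for that layer's $\sigma_w^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def critical_sigma_w2(K, phi_prime, n_samples=200_000):
    """Solve sigma_w^2 * E_{z~N(0,K)}[phi'(z)^2] = 1 by Monte Carlo."""
    z = rng.normal(0.0, np.sqrt(K), size=n_samples)
    return 1.0 / np.mean(phi_prime(z) ** 2)

# Example: tanh layer with incoming kernel K = 1 (illustrative values).
sw2_tanh = critical_sigma_w2(1.0, lambda z: 1.0 - np.tanh(z) ** 2)
# Sanity check: for a linear layer (phi'(z) = 1), the answer is exactly 1.
sw2_linear = critical_sigma_w2(1.0, lambda z: np.ones_like(z))
```

In practice the closed-form expressions from the paper's appendices would replace the Monte Carlo step, and $K$ itself would be propagated through the preceding layers.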
Rebuttal 1: Rebuttal: # Global Response to All Reviewers ## Infinite width limit For analytical results, we first take the infinite width limit and then a large depth limit. Stated differently, this means that the depth-to-width ratio $L/N \sim o(1)$. In practice, this assumption holds as long as $L/N$ is small. Moreover, residual connections make networks effectively shallow, significantly decreasing the effective $L/N$ ratio and allowing our conclusions to be applied at even larger depths. All of our phase diagrams and training results are obtained from real networks with reasonable width and depth (MLP: N=500, L=50; ResNet and MLP-Mixer: N=512, L=32). This should convince the reviewers that although our theory is formulated in the infinite width limit, our results concerning criticality describe real-world models, albeit with an error (fluctuation) of up to $O(L/N)$. ## Transformer Architectures We have added phase diagrams for Vision Transformers (ViTs) in the global response PDF file. As predicted, $\mu=1$ with pre-LN is everywhere critical. Removing LayerNorm or using smaller $\mu$ leads to non-critical initializations. We will add these results to the Appendix of the final version. On the theoretical side, the analysis is more involved. It is known that the commonly used scaled dot-product attention layer converges to a Gaussian process in the limit where the number of heads and the embedding dimension go to infinity [1]. However, real-world models are far from this limit, wherein $L/n_{heads}$ is $O(1)$ (even with residual connections). This makes the infinite-width analysis unfavorable. Moreover, the softmax function in attention prevents closed-form results for most calculations. Despite these limitations, prior work has found a particular width scaling of the initialization and learning rate for pre-LN transformers [2].
Despite the lack of a comprehensive theoretical understanding, our tools can extract meaningful information about the magnitude of gradients for attention layers (see the attached one-page PDF). They identify criticality and correctly predict that pre-LN transformers are better behaved than post-LN transformers [3]. ## Captions and Clarity Due to space limitations, we had oversimplified some of the figure captions. We will write more detailed captions in the final version. Details of the experiments and computational resources are fleshed out in Appendix A. [1] J. Hron, et al., Infinite attention: NNGP and NTK for deep attention networks, 2020. [2] E. Dinan, et al., Effective Theory of Transformers at Initialization, 2023. [3] R. Xiong, et al., On Layer Normalization in the Transformer Architecture, 2020. Pdf: /pdf/336ec9878a37a0ae56efca4f68a445e40fa2a2eb.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Small batch deep reinforcement learning
Accept (poster)
Summary: The paper demonstrates the advantages of employing smaller batch sizes in value-based RL algorithms. It reveals that utilizing a smaller batch size can moderately or significantly enhance performance across several value-based RL algorithms, with the exception of DQN, where there are no improvements. Nonetheless, the paper reveals that incorporating a deeper network or employing n-step returns can restore the benefits of smaller batch sizes in the case of DQN. To examine the underlying reasons behind the benefits of smaller batch sizes, the paper conducts a comprehensive set of experiments. These experiments investigate various factors, such as increased update variance, reduced representation norm, and improved network expressivity, shedding light on how smaller batch sizes can help performance. Strengths: - The paper presents a comprehensive and robust series of experiments, providing evidence for the effectiveness of smaller batch sizes in various value-based RL algorithms. - The analysis conducted in the paper offers clear insights into the reasons why smaller batch sizes yield improvements in performance. - The paper also delves into an in-depth investigation to understand why smaller batch sizes do not yield benefits in the case of DQN, providing valuable insights and explanations for this observed phenomenon. Weaknesses: While the result may not be technically novel, I agree that a comprehensive study on the topic of the benefits of smaller batch sizes is still highly valuable to the research community. Although some researchers are aware that smaller batch sizes can be beneficial, as seen in certain implementations of existing algorithms, such as IMPALA (e.g., https://github.com/facebookresearch/torchbeast uses a default batch size of 8), there is a lack of thorough analysis on this matter. Thus, the paper fills this gap and provides valuable insights and understanding of the issue. 
I appreciate the significance and value that the paper brings to the field. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I suggest explaining the intuition on how the n-step return interplays with the batch size. My guess is the n-step return has a larger variance than a single-step return, and a smaller batch size can somehow further magnify this variance. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I don't see any clear limitations in this study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and appreciate the positive evaluation. Indeed, our intuition matches yours: the increase in variance from both fronts (smaller batch size and multi-step updates) seems to have a positive effect on performance. It is possible that they are adding different types of variance, which have a complementary effect on learning. This finding is further confirmed by Figure 2 in the PDF included with the general rebuttal at the top, which evaluates on classic control environments and shows that the small batch effect is also more pronounced when combined with multi-step updates. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I suggest expanding more on the different types of variance in the revised paper and how the two variance interplay. My understanding is that the gradient update is the average of m iid random variables, and n-step return increases the variance by increasing individual variance, while a smaller batch size increases the variance by reducing m. This may explain how n-step return is needed for a small batch size to show effects, as the two sources of variance are multiplied together. I have no additional questions and will stand by my initial score. --- Reply to Comment 1.1.1: Title: Expanding discussion on different types of variance Comment: Thank you for your suggestion. We agree with your interpretation of the two types of variance and agree that it would be worthwhile to expand on this in our revised paper!
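The variance decomposition sketched in the exchange above (per-sample target variance grows with the n-step return, while averaging over a batch of size m divides it by m, so the two effects multiply) can be checked with a toy simulation. The Gaussian "reward noise" model here is purely illustrative and not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_variance(n_step, batch_size, reward_var=1.0, trials=20_000):
    """Variance of a minibatch mean whose per-sample noise sums n reward terms.

    Toy model: each sample's n-step target noise is a sum of n i.i.d. Gaussian
    reward noises (discounting ignored), so Var(update) ~ n * reward_var / m.
    """
    noise = rng.normal(0.0, np.sqrt(reward_var),
                       size=(trials, batch_size, n_step))
    per_sample = noise.sum(axis=2)        # n-step target noise per sample
    return per_sample.mean(axis=1).var()  # variance of the minibatch mean

v_small_nstep = update_variance(n_step=3, batch_size=8)    # ~ 3/8
v_large_onestep = update_variance(n_step=1, batch_size=32)  # ~ 1/32
```

Under this model, a small batch combined with multi-step returns yields the highest update variance, consistent with the intuition that the two sources of variance compound.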
Summary: This work studies the effect of reducing the batch size in value-based deep RL algorithms. Surprisingly, the authors find that smaller batch sizes generally improve learning performance and speed up training in terms of wall-clock time. Towards understanding this "small batch effect", they empirically investigate how batch size relates to e.g. multistep learning, variance of gradient updates, network capacity, network plasticity, etc. Strengths: 1. Overall, the writing is extremely clear and well-organized. While reading, I found myself asking questions that the authors then answered in later sections of the paper (e.g. Lines 125-129 prompted questions on how batch size relates to plasticity and network capacity) 2. The empirical evaluation is thorough; the authors consider a wide range of Atari tasks and investigate how batch size relates to a variety of learning factors. 3. The relationships uncovered in this work relate to many RL research areas (e.g. exploration, continual learning), and I believe they will spur interesting future research. Weaknesses: 1. The study only considers visual tasks with discrete actions. Does a small batch size improve data efficiency if you use the non-visual RAM observations in a few representative tasks? Do the same trends observed in Fig. 11 still hold? Since the paper is scoped to focus on value-based algorithms, I believe it is sufficient to state the discrete-action limitation in the conclusion. 2. Since a smaller batch size seems to come with a variety of benefits (e.g. smaller gradient norms), it isn't clear to me if the observed benefits in Fig. 6 are due to improved exploration via higher-variance gradient updates. Since it is likely not feasible to isolate exploration, can the authors instead clarify how these figures show improved exploration? **Minor comments:** 1. Lines 59-60: "r" -> "r_t" 2. I believe line 265 should say "**decreasing** the batch size should increase variance" 3.
Lines 266-268: "That it's effect..." This sentence is difficult to read and would benefit from rephrasing. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Fig. 1: Can the authors provide any intuition on why 22/60 tasks performed worse with a smaller batch size with QR-DQN? 2. In Fig. 8, a larger batch size improves performance only for learning rate = 5e-06. Can the authors provide an explanation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful review of our paper and suggestions for improvement. We address your concerns and questions below, referencing the PDF attached to the general response above. We will also correct all the minor issues pointed out in our submission. ## The study only considers visual tasks with discrete actions. Although our work focused on value-based algorithms, we agree it is worth investigating whether the effect is present in non-visual and continuous control tasks. We ran some experiments with DQN and Rainbow on 2 classic control environments, where inputs are state vectors (Figure 2 in the rebuttal PDF), as well as MPO [1] on DM-control environments (Figure 3 in the rebuttal PDF). In both cases, we see a general trend towards improved performance when using smaller batches. In the classic control case we also see confirmation of the findings presented in Section 4.1 of our submission: the small-batch effect is more pronounced with multi-step learning. ## Where is improved exploration coming from? Many methods have been proposed to address the exploitation-exploration dilemma, and some techniques emphasize exploration by adding noise directly to the parameter space of agents (Fortunato et al., 2018 [2]; Hao et al., 2023 [3]; Plappert et al., 2017 [4]; Gupta et al., 2018 [5]), which inherently adds variance to the learning process. Noise perturbation is another approach that has been taken to induce exploration [6]. In line with these works, our analyses suggest that increasing variance by reducing the batch size may result in similar beneficial exploratory effects.
As the reviewer rightly points out, it is difficult to isolate the direct impact on exploration; however, the improved performance observed on all the hard-exploration games in Atari, as well as in MountainCar (which is considered a hard-exploration classic control environment, shown in Figure 2 in the rebuttal PDF), suggests that improved exploration may be an advantageous consequence of the variance induced by a reduced batch size. As we discussed in Section 5.1, we believe further work exploring the impact of variance injection in deep RL algorithms is necessary, and we will add these points to our discussion. ## Why did 22/60 of the games perform worse? It is very rare for an agent/algorithm to outperform the baselines on all games considered. For example, C51 [7] improved on 44/57 games, Rainbow [8] on 26/57, QR-DQN [9] on 52/57, and Munchausen-IQN [10] on 40/60. Reporting aggregate performance, as is the norm with this benchmark, does mean we (as a community) gloss over some of these per-game differences. We agree this is an important problem the community should pay more attention to and will add a note to this effect in our discussion. ## Why does a larger batch size only improve for a smaller learning rate? The default learning rate used (5e-05) was one optimized by prior work for a batch size of 32. It has been previously shown that one should reduce learning rates when increasing batch sizes [11], which is consistent with our findings (e.g. reducing the learning rate can be beneficial when increasing the batch size). # References * [1] Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. In International Conference on Learning Representations, 2018. * [2] Fortunato, M., Azar, M. G., Piot, B., Menick, J., Osband, I., Graves, A., Mnih, V., Munos, R., Hassabis, D., Pietquin, O., Blundell, C., and Legg, S. Noisy networks for exploration.
In Proceedings of the International Conference on Representation Learning (ICLR 2018), Vancouver (Canada), 2018.
* [3] Jianye Hao, Tianpei Yang, Hongyao Tang, Chenjia Bai, Jinyi Liu, Zhaopeng Meng, Peng Liu, and Zhen Wang. Exploration in deep reinforcement learning: From single-agent to multiagent domain. IEEE Transactions on Neural Networks and Learning Systems, pages 1–21, 2023.
* [4] Plappert, M., Houthooft, R., Dhariwal, P., Sidor, S., Chen, R. Y., Chen, X., Asfour, T., Abbeel, P., and Andrychowicz, M. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.
* [5] Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-reinforcement learning of structured exploration strategies, 2018.
* [6] Eberhard, O., Hollenstein, J., Pinneri, C., and Martius, G. Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In Deep Reinforcement Learning Workshop, NeurIPS 2022, 2022.
* [7] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 449–458, 2017.
* [8] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining Improvements in Deep Reinforcement Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
* [9] W. Dabney, M. Rowland, Marc G. Bellemare, and R. Munos. Distributional reinforcement learning with quantile regression. In AAAI, 2018.
* [10] Nino Vieillard, Olivier Pietquin, and Matthieu Geist. Munchausen Reinforcement Learning. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020).
* [11] D. Randall Wilson and Tony R. Martinez. The general inefficiency of batch training for gradient descent learning. Neural Networks, 16(10):1429–1451, 2003.
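As a toy illustration of the learning-rate/batch-size coupling mentioned in the rebuttal above, here is a sketch of the common linear-scaling heuristic (a hypothetical example for intuition only, not the authors' tuning procedure):

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    # Linear-scaling heuristic: adjust the learning rate proportionally
    # to the batch size, relative to a tuned (base_lr, base_batch) pair.
    return base_lr * batch / base_batch

# Starting from the default (5e-05 tuned for batch size 32), a batch of 8
# would get a proportionally smaller step, and a batch of 64 a larger one.
print(scaled_lr(5e-5, 32, 8))   # → 1.25e-05
print(scaled_lr(5e-5, 32, 64))  # → 0.0001
```

This is exactly the direction of the effect described above: keeping the learning rate fixed while changing the batch size implicitly changes the effective step-size-to-noise ratio.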
--- Rebuttal Comment 1.1: Comment: Thank you for your response. All of my comments have been addressed, and I maintain my score. When I asked why smaller batch sizes decrease performance in 22/60 games, I should've been more specific. I meant to ask if these 22 games had anything in common that might explain why small batch sizes don't help? For instance, Fedus et al. [1] noted that most tasks saw an increase in performance when the replay buffer contained "younger" data (i.e. data from more recent policies), though hard exploration tasks saw a significant drop in performance. [1] Revisiting Fundamentals of Experience Replay. Fedus et al. ICML 2020. --- Reply to Comment 1.1.1: Title: Commonality between 22 games Comment: Thank you for clarifying your point about the 22 games, which is a good question to raise. While we don't observe any clear game characteristic that would be indicative of whether a game would benefit from smaller batch sizes, we do observe some commonality between some of the algorithms we considered. For example, in both YarsRevenge and JamesBond a batch size of 8 does the worst for QR-DQN (see Figure 15 in the appendix), which is also the case for IQN (see Figure 19 in the appendix). We are generating per-game plots for some of the other figures in the main paper and will include them in the appendix, as these per-game results can prove useful for investigating commonalities across games, as you suggest. We will expand our discussion to include these points, thank you for suggesting it!
Summary: The work investigates the influence of replay batch size in experience replay for online reinforcement learning. The key finding is that reducing the batch size can be more beneficial, which contradicts common wisdom from supervised deep learning. Strengths: The paper expands the analysis of an underinvestigated observation that can potentially change the default settings for experience replay and reduce the computational cost of further experiments. The authors provide new insights into the computational impact of batch size in experience replay and analyze network optimization dynamics. One of the strengths of the paper is its extensive experiments conducted with different settings and architectures. Weaknesses: Experience replay is also used in continual learning. It is surprising that the authors missed the paper that already drew the same conclusion [1], though left it underinvestigated. Nonetheless, it should be mentioned in the related work section. [1] Wołczyk, M., & Krutsylo, A. (2021). Remember More by Recalling Less: Investigating the Role of Batch Size in Continual Learning with Experience Replay (Student Abstract). AAAI Conference on Artificial Intelligence. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review of our submission, and for pointing us to the paper on experience replay in continual learning; it is indeed quite related. Wołczyk, M., & Krutsylo, A. (2021) investigate the dynamics of experience replay in online continual learning, and focus on the effect of the batch size chosen when sampling from a replay buffer. They find that smaller batches are better at preventing forgetting than larger batches, contrary to the intuitive assumption that it is better to recall more samples from the past to avoid forgetting. Additionally, the authors show that this phenomenon does not disappear under learning rate tuning. Their settings are similar to those used to generate Figure 3 in Sokar et al., 2023 [1], and suggest that target non-stationarity (e.g. bootstrapping) may have a role to play in explaining the small-batch-size effect we are observing. We will add this to our discussion. # References [1] Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, and Utku Evci. The dormant neuron phenomenon in deep reinforcement learning. In ICML, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am glad to see this improvement, and I have no further questions.
Summary: The authors study how batch size affects RL performance, and argue that a reduced batch size might (quite surprisingly) bring better performance in a number of settings; in particular, for QR-DQN a smaller batch can lead to much better performance (almost doubling it). Different batch sizes are tested in Atari environments together with other hyperparameter changes. The authors also point out that a benefit of using a smaller batch size is the reduction in wall-clock time. The paper focuses on empirical study and analysis, and provides a list of interesting findings. Strengths: **originality** - the paper is dedicated to bringing a better understanding of the effect of smaller batch sizes through extensive empirical studies; although there are already past works that study the effect of batch sizes, the results in this paper bring some new findings and observations and can be considered a novel contribution. **quality** - overall presentation is good, but some arguments can be improved. - extensive experiments and ablations are great. **clarity** - Overall the paper is clear to read, and the structure is easy to follow. **significance** - the observations presented in the paper can be interesting to the research community and help us better understand the effect of different batch size settings. - the authors argue that a smaller batch size can bring a wall-clock time reduction and, given the results, also has the potential to bring better performance, which can be helpful towards better algorithms. Weaknesses: Major concerns: **related work** - the authors claim "Surprisingly, to the best of our knowledge there have been no studies exploring the impact of the choice of batch size in deep RL." Well, there are indeed some works that touch on this issue, to list a few: - Accelerated Methods for Deep Reinforcement Learning by Adam Stooke and Pieter Abbeel.
- Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control by Islam et al. - Shallow updates for deep reinforcement learning by Levine et al. - An Empirical Model of Large-Batch Training by McCandlish et al. - these are older papers; please do some searching on Google Scholar and have a better discussion of related work. Some of these works found larger batch sizes to be more beneficial. The authors need to spend a bit more effort looking at related work and try to explain the discrepancy. **Arguments and conclusions made in the paper** - Some of the arguments can be improved. For example, line 95: "In Figure 3 we can observe that, in general, reduced batch size results in improved performance." I don't think this is true: in Figure 3 there are 8 curves in 4 figures that have lower batch sizes than the default, and 4 out of these 8 curves have weaker performance than the default, while for the other 4 one can argue they are stronger or slightly stronger than the default. One can argue that in QR-DQN a batch size of 8 is really good, but from this figure alone I don't see how it's a general trend. - Line 226, Figure 11 (third column): I am not convinced there is a clear correlation between batch size and gradient norm. The variance is high, and for Asteroids and SpaceInvaders it seems that in the late stages of training they start to get higher gradient norms. - Line 242: the authors argue that "it is possible that the network is better able to adapt to an earlier rank collapse than to a later one." I am not sure how this argument is made; Figure 11 (col 5) shows that srank does not positively correlate with performance at all. It seems this collapse has no negative effect on training, or even indicates stronger performance, which again goes against what has been argued in previous literature. And the argument that "Smaller batch sizes seem to result in networks that are both more expressive and with greater plasticity." seems entirely wrong to me, as Figure 11 shows that on SpaceInvaders a small batch size of 8 has the lowest srank and the highest percentage of dormant neurons. - Overall, I found a number of the points made in the paper to be supported by only very weak evidence, and they are not convincing. The authors might want to either modify the arguments into more accurate ones, or try to find better evidence to support them. If the evidence is weak, the conclusion might not hold at all; it could really be coming from randomness or from excessive fine-tuning of a particular algorithm on some particular environments. The paper is currently lacking in these 2 aspects, but can be a good paper if these concerns are properly addressed. Minor concerns: - **originality** the novelty of the work is reduced by the fact that it is focused on studying existing methods, but mitigated by the novel empirical results, ablations and analysis. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Suggestions as discussed in the weaknesses section: - spend more effort on related work and explain discrepancies from conclusions in the previous literature. After looking at these previous findings, why do you think your results differ from those that mostly say larger batch sizes are better? - go through the arguments made in the paper and modify them to be more accurate, or bring in better evidence to support them. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
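For readers unfamiliar with the two network-health metrics debated in the review above, here is a minimal sketch of how srank (the effective-rank measure from the implicit under-parameterization literature, Kumar et al., 2021) and the dormant-neuron fraction (Sokar et al., 2023) are typically computed from a batch of feature activations. The thresholds, shapes, and data are illustrative, not the paper's exact settings:

```python
import numpy as np

def srank(features: np.ndarray, delta: float = 0.01) -> int:
    # srank_delta: smallest k such that the top-k singular values capture
    # a (1 - delta) fraction of the total singular-value mass.
    s = np.linalg.svd(features, compute_uv=False)
    cum = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cum, 1.0 - delta)) + 1

def dormant_fraction(activations: np.ndarray, tau: float = 0.0) -> float:
    # A neuron is "dormant" if its mean |activation|, normalized by the
    # layer-wide average, falls at or below the threshold tau.
    mean_abs = np.abs(activations).mean(axis=0)
    scores = mean_abs / (mean_abs.mean() + 1e-9)
    return float(np.mean(scores <= tau))

# A feature matrix dominated by two directions has srank 2 ...
feats = np.diag([10.0, 1.0, 1e-3])
print(srank(feats))  # → 2
# ... and a layer where one of two neurons never fires is 50% dormant.
acts = np.array([[1.0, 0.0], [1.0, 0.0]])
print(dormant_fraction(acts))  # → 0.5
```

Both quantities are computed on a (batch × features) activation matrix of the penultimate layer; the review's point is that low srank and many dormant neurons are conventionally read as signs of reduced expressivity.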
Rebuttal 1: Rebuttal: We thank the reviewer for a careful read of our submission, and for the concrete suggestions for improving it. We address each point separately below, referencing the PDF attached to the general rebuttal at the top. We hope our responses, and the amendments we will make to our paper based on the points raised, are sufficient to address your concerns. Please let us know if you feel something was not properly addressed.

## Related work

We thank the reviewer for pointing out these papers, which had escaped our attention (with the exception of Stooke & Abbeel, which we were already citing). They are quite relevant to our work, so we will expand our discussion to include them accordingly (specifics below). We will also soften the wording regarding no prior studies on batch size in RL. Some discussion points on the works suggested:
* Stooke & Abbeel focus on distributed training, which has very different learning dynamics than those considered in our work. Indeed, they emphasize _increasing_ the batch size to obtain better performance.
* Islam et al. focus on two policy-gradient methods (DDPG and TRPO), which again have quite different learning dynamics than the value-based methods we consider. Their findings suggest that _larger_ batch sizes yield better performance, which contrasts with our findings. To investigate further, we ran some experiments with MPO [1] (an off-policy value-based method that still shares some of the benefits of on-policy policy-gradient methods like TRPO; see Figure 3 in the rebuttal PDF). Our results suggest there is also a tendency towards improved performance with smaller batch sizes; we suspect this advantage is due to it being value-based, but more investigation is necessary. The evident differences in the effect of batch size on value-based versus policy-gradient methods are certainly worth discussing, and we will add a discussion to this effect, along with our new results.
* Levine et al. focus on "shallow" RL (e.g.
using linear approximators instead of deep networks), and find that larger batch sizes yield better performance.
* McCandlish et al. focus on very large batch sizes; the smallest batch size considered is 64, and they go as high as the millions for Dota 5v5. For their Atari results (most comparable to ours), the authors focus on A2C (another asynchronous policy-gradient method) and find that the best batch sizes are 100-1000 early in training and 400-8000 later in training, which are substantially larger than what we considered.

Based on the reviewer's suggestion, we will also reference and compare our work to [2], which explores the connection between batch size and importance sampling, and [3,4], which discuss the impact of gradient norms on training (which our analyses show is connected to the choice of batch size).

## On 4/8 curves having weaker performance than the default

While true, it is worth noting that DQN in this figure does not use multi-step returns; as we discuss and analyze in section 4, we do not observe improved performance with smaller batch sizes for DQN without multi-step returns. Indeed, as can be observed in Figure 10, both smaller-than-default batch sizes yield improved performance for DQN when multi-step updates are used. We will clarify this point in our discussion.

## Correlation between batch size & gradient norm

For computational reasons, we focused on three games for our submission. However, we have now run this analysis on five extra games (see Figure 1 in the rebuttal PDF). These show a stronger correlation between batch size and gradient norm. Nonetheless, we will modify our discussion to soften the claim, given your concerns, and include the extra figures in the appendix as further evidence of the connection.
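The batch-size/gradient-noise link discussed above has a simple statistical core: the variance of a minibatch gradient estimate shrinks roughly as 1/batch_size. A toy numpy sketch with synthetic per-sample "gradients" (illustrative only, not the paper's agents or data) makes the effect visible:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scalar per-sample gradients with (approximately) unit variance.
per_sample = rng.normal(size=10_000)

def minibatch_grad_std(grads: np.ndarray, batch_size: int,
                       n_batches: int = 2_000) -> float:
    # Std-dev of the minibatch-mean gradient across many sampled batches;
    # for i.i.d. samples, theory predicts roughly 1 / sqrt(batch_size).
    idx = rng.integers(0, len(grads), size=(n_batches, batch_size))
    return float(grads[idx].mean(axis=1).std())

for b in (8, 32, 128):
    print(b, minibatch_grad_std(per_sample, b))
```

Running this shows the update noise at batch size 8 is roughly four times that at batch size 128, which is the extra stochasticity the rebuttal argues may act as beneficial exploration/variance injection.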
## Connection between srank, plasticity, batch size, & performance The point we were making was based on the observation that in all three games we observed improved performance with reduced batch sizes, _and_ in all three games we observed an early collapse in srank. This is of course a correlation, and not necessarily a causal relationship, but we felt it was worth remarking on. The comment about plasticity was with regards to the discussion in the text: “although the relationship with batch size is not as clear as with some of the other metrics, smaller batch sizes appear to have a much milder increase in their frequency”. As Sokar et al. [5] showed, the level of dormant neurons increases (mostly) monotonically throughout training (as the black line for batch_size=32 shows); with smaller batch sizes we have a milder increase throughout training, although we do see them start with a higher fraction of dormant neurons. In both cases we can soften our claims and add the points made above to the discussion. ## Why are our findings different than those saying larger batch sizes are better? As discussed above, there appears to be a correlation between the use of distributed and/or policy gradient methods and improved performance due to larger batch sizes. Our work explores non-distributed value-based methods. As mentioned in the discussion above, we will highlight this point further in our discussion. # References * [1] A. Abdolmaleki, J.T. Springenberg, Y. Tassa, R. Munos, N. Heess, and M. Riedmiller. Maximum a posteriori policy optimisation. ICLR 2018 * [2] T. Lahire, M. Geist, and E. Rachelson. "Large batch experience replay." arXiv preprint arXiv:2110.01528 (2021) * [3] P. Mi, L. Shen, T. Ren, Y. Zhou, X. Sun, R. Ji, and D. Tao. "Make sharpness-aware minimization stronger: A sparsified perturbation approach." NeurIPS (2022) * [4] X. Zhang, R. Xu, H. Yu, H. Zou, and P. Cui. "Gradient Norm Regularizer Seeks Flat Minima and Improves Generalization." (2022) * [5] G. 
Sokar, R. Agarwal, P.S. Castro, and U. Evci. The dormant neuron phenomenon in deep reinforcement learning. In ICML, 2023 --- Rebuttal Comment 1.1: Title: Are your concerns addressed? Comment: Dear reviewer, Given that the discussion period with authors is almost over, we wanted to reach out to see if there were any of your concerns you felt were not properly addressed, so that we may have time to respond to them if so. If all your concerns have been addressed, we would invite you to revise your score accordingly. Once again, thank you for the careful review of our paper!
Rebuttal 1: Rebuttal: Dear reviewers and (S)ACs, we are attaching a PDF with three figures that we reference in each of our reviewer-specific rebuttals. The figures are: **Figure 1:** Gradient variance analysis (with corresponding reward curves) for five extra Atari 2600 games, that help strengthen the claim of correlation between batch size, gradient variance, and agent performance. This is mostly in response to reviewer KbLx. **Figure 2:** Effect of batch size on two classic control environments, which are state-based (as opposed to pixel-based, like Atari 2600 games). This is mostly in response to reviewer scQb. **Figure 3:** Effect of batch size on MPO evaluated on DM-control. This is mostly in response to reviewers KbLx and scQb. Pdf: /pdf/8c95442a040299ed402c6b30746957b7bbc012d3.pdf
NeurIPS_2023_submissions_huggingface
2023
Progressive Ensemble Distillation: Building Ensembles for Efficient Inference
Accept (poster)
Summary: The paper addresses the problem of obtaining an ensemble of small models suitable for flexible inference requirements and anytime inference, somewhat similar to cascading classifiers. A key contribution is the derivation of a weak learning condition for the distillation of a pre-trained model into an ensemble of smaller students, as well as an algorithm to obtain such an ensemble. The students are allowed to reuse intermediate activations of other students to efficiently expand the student model hypothesis set. The method is supported by theoretical results on the generalization error, and empirical results on classification tasks for synthetic, vision, and sensor data. Strengths: - The paper is well-written, clear, and well-organized. It is mostly easy to follow and understand the argumentation. - The proposed technique is supported both theoretically and empirically, which is a strong feature. Weaknesses: - The computational requirements during training are unclear. Since Algorithm 2 requires the fitting of potentially multiple models to obtain each student, it could be quite expensive (especially with large $R$ and/or large `max-search`), but this is not addressed explicitly in the paper - The dimensions of $K_t^-$ and $K_t^+$ are $N \times L$, and especially for large $N$ this could be a bottleneck during training (granted, for a mini-batch this is less severe). Furthermore, while reusing stored activations of previous students in subsequent students might keep the parameter count stable, it also requires additional memory and care in choosing which activations to store. Thus there is some overhead in storing and loading activations and $K$-matrices throughout training. - The empirical results are weak at comparing to other baseline methods, and the method struggles on the TinyImageNet and ImageNet-1k tasks, where additional inference time is required compared to the teacher.
Minor: - L14/15: Claiming distillation is a rigid procedure seems too bold, as a multitude of distillation techniques exists, providing options to obtain lots of different students. Granted, most are aimed at obtaining a single student, but distillation in general is very flexible. - L121: "proabability" -> "probability" - Algorithm 1: $R$ is not specified in the algorithm, but needs deduction from Section 3.3 - Inconsistent use of RESCHED and RESHED - Formatting of B-Distill and E-RNN is inconsistent in different places in the paper (e.g. L300-309). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Generally, my largest concern is the lack of other baseline methods for the empirical results and the unclear computational requirements during training. Additionally: - In Section 5.3 (and Figure 2) it appears that FLOPs are converted to inference time, but it is unclear how, and whether this conversion actually holds for an actual implementation. Consider measuring the actual inference time instead. - Clarify the computational requirements during training. What is the overhead on memory and training speed when storing and loading activations and $K$-matrices? - Include common distillation techniques and other methods for early-exit or anytime inference as baselines. E.g. it is unclear how NO-RESCHED compares to other distillation schemes with appropriately sized students, or whether existing anytime inference techniques surpass the proposed one. Minor: - Some dimensions of e.g. $x_i$ and $f(x_i)$ are not clear from the paper (but can be deduced), and e.g. in L119, what indices is $j$ summing over? Consider introducing dimensions more clearly early on. - Figure 2: Since the $x$-axis is the fraction of teacher inference time, the teacher should be marked by a dot and not a line, since the teacher is not able to perform inference at every possible inference time. It should be clear that the teacher is not flexible in inference time.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and questions. **Inference time calculations.** The inference time numbers are end-to-end numbers reported by the DeepSpeed profiler on NVIDIA 3080Tis. The No-RESCHED baseline is used as an idealized baseline, and a modified version trained using distillation will improve on the numbers reported now. In both cases, we expect this baseline to be better than B-DISTIL. We use the training-based baseline instead of distillation, as the teacher model is a natural baseline at `frac_inf = 1.0`. Large values of `max-search` could indeed lead to an increase in training times if FIND-WL fails often, though this is not typically a concern for the larger models (later rounds). Models in earlier rounds are often relatively small, and for these we do see additional training time, as data movement often becomes the training bottleneck. **Scalability and compute requirements.** Although B-DISTIL takes additional inference time to make accurate predictions for ImageNet, the teacher models in this case have 100+ layers. Tasks in efficient inference that rely on smaller models for image datasets, or scenarios (e.g., edge inference) involving sensor/audio data streams, are key application areas that stand to benefit from B-DISTIL. However, even at larger scales, where the data distribution during inference is skewed towards 'easier' samples, B-DISTIL can be advantageous in the average case, as the majority of the predictions can be completed quickly (Google-13, Table 1, T=50%). In fact, our goal in providing the ImageNet-scale experiments was to demonstrate the scalability of our implementation, especially since maintaining and updating the weight matrix of a 1000-class, 1M-training-image dataset can be non-trivial.
We will clarify this in the main text and add a discussion about the techniques we use to manage compute requirements (e.g., streaming weights off-disk asynchronously [code: `ddist.data:DataFlowControl`], performing weight updates in log-space [code: `ddist:ClfPlayer.log_space_update()`], using a shared-memory object store [code: `ddist.dispatch` utilizing the `ray` backend]) in the appendix. **Baseline methods.** The NO-RESCHED baseline is trained using standard distillation on the same model structure used by B-DISTIL. If we were to compare the models at a fixed $T$, a single model trained using standard distillation techniques does outperform B-DISTIL (Figure 2), especially in later rounds. However, this baseline cannot be deployed in practice for anytime inference, as it requires knowledge of the available inference time upfront to pick the 'right model'. We demonstrate that B-DISTIL remains competitive with this baseline while being realizable in practice. As for early prediction, the E-RNN method we compare against (Table 1) is a standard method for early prediction in sequential inference. We will clarify these points in the main draft. **Minor.** Thank you for the suggestion on the horizontal line for the teacher model and the other corrections/typos, which we will carefully address in our revision. We will also revise the draft to use only one unit of measurement, and improve and move details about the profiling step to the main draft. Thanks very much for these suggestions to improve the presentation. --- Rebuttal Comment 1.1: Comment: You write *"The No-RESCHED baseline is used as an idealized baseline, and a modified version trained using distillation will improve on the numbers reported now."*, but later also write *"The NO-RESCHED baseline is trained using standard distillation on the same model structure used by B-DISTIL"*.
I get the intuition behind the NO-RESCHED baseline (and why it is a difficult baseline), but it should be clearer what the actual training procedure is and/or what architectures are used each time. --- Reply to Comment 1.1.1: Title: Response to comment by HW4G Comment: We apologize for the confusion. When we say the NO-RESCHED baseline is trained using distillation on "the same model structure as the teacher model", what we mean is that if the teacher model is based on the ResNet architecture, then the student model in this baseline will also be from the same family. The model configuration of the student model at a specific round $T$ (for instance, the number of layers, number of blocks, etc.) is chosen so that its inference time is comparable to that of the ensemble of models produced by B-DISTIL at the end of the same round. Standard distillation is used to train this student model (i.e., training against soft logits). However, instead of considering a single deep ResNet model of appropriate size, we could also consider randomly re-initializing and re-training all the parameters in the ensemble produced by B-DISTIL. As we mention, we include the former baseline as the teacher model is a data point on this plot at `frac_inf = 1.0`. Moreover, for a specific $T$, a single ResNet model is denser and deeper (smaller capacity gap) when compared to the ensemble structure; while the ensemble structure at $T$ has a similar compute requirement, it typically contains relatively shallower models, which _could_ cause a drop in performance. We will add a clarification of this point in the main text and specify the exact model configuration used for this baseline in the appendix.
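To make the anytime-inference setting discussed in this thread concrete (why a single fixed-size distilled model cannot serve when the inference budget is unknown upfront), here is a small illustrative sketch, not the B-DISTIL implementation: ensemble members are evaluated in order, and the running weighted-average prediction is returned as soon as the budget runs out. `StubModel` and the costs/logits are hypothetical stand-ins:

```python
import numpy as np

class StubModel:
    # Hypothetical stand-in for an ensemble member with a fixed inference cost.
    def __init__(self, cost: float, logits: np.ndarray):
        self.cost, self.logits = cost, logits

    def __call__(self, x):
        return self.logits

def anytime_predict(models, weights, x, budget: float) -> np.ndarray:
    # Evaluate members in order, refining the weighted-average prediction
    # until the next member no longer fits in the remaining budget.
    total, total_w = None, 0.0
    for model, w in zip(models, weights):
        if model.cost > budget:
            break
        budget -= model.cost
        out = w * model(x)
        total = out if total is None else total + out
        total_w += w
    return total / total_w

models = [StubModel(1.0, np.array([v])) for v in (1.0, 2.0, 3.0)]
# With budget for only two members, the prediction averages the first two.
print(anytime_predict(models, [1.0, 1.0, 1.0], None, budget=2.0))  # → [1.5]
```

A single distilled model, by contrast, either fits the budget entirely or produces nothing, which is the point the authors make about the NO-RESCHED baseline.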
Summary: This paper studies the problem of "progressive distillation": given a large teacher model, the task is to decompose it into smaller student models so that progressively evaluating additional models in this ensemble results in more accurate predictions. The main contributions of this paper are: (i) A principled approach called B-DISTIL for the progressive distillation problem: the authors formulate a two-player zero-sum game, from which they derive a weak learning condition. B-DISTIL approximately solves this game. (ii) Theoretical guarantees for the proposed approach under certain assumptions. Strengths: Principled approach with theoretical guarantees that seems to perform well in real-world settings. Weaknesses: The proposed approach seems somewhat sophisticated — perhaps not even very easy to implement. By reading the paper, it was not clear to me whether there exists a simpler (but non-idealized) baseline that could be used for comparison — mostly to reassure the reader that the introduced sophistication is actually necessary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there a simple (maybe standard Knowledge-Distillation-based) baseline that is meaningful to compare against? For example, in Lines 70-72 the authors mention that "when performing distillation onto a weighted combination of ensembles, it has been observed that adding additional models into the ensemble does not dramatically improve performance over that of a single distilled model". While this could be the case, could such an approach be used as a simple baseline for this setting, so that one could see the trade-offs between implementing a simple approach and potentially improving performance by implementing a more sophisticated one like the one proposed by the authors? (I understand if the answer is "there's no simple way to approach this problem", but maybe then this should be mentioned more explicitly.)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have explained the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments and feedback. Although our algorithm seems sophisticated, we note that most of the additional sophistication (on top of standard boosting) is restricted to the FIND-WL subroutine. Conceptually, the high-level algorithm has the same flavor as many boosting methods, where the existence of a subroutine equivalent to `FIND-WL` is assumed without specifying details. However, scaling to large training datasets is non-trivial even for standard boosting due to the weight matrices. The ImageNet-scale experiments demonstrate the scalability of our implementation, where we use various techniques (e.g., streaming weights off-disk asynchronously [code: `ddist.data:DataFlowControl`], performing weight updates in log-space [code: `ddist:ClfPlayer.log_space_update()`], using a shared-memory object store [code: `ddist.dispatch` utilizing the `ray` backend]) to manage compute requirements. Regarding baselines, a simple baseline that only uses traditional distillation is one consisting of many small distilled models, sequentially evaluated until interrupted. We have included this baseline, which we call "RESCHED", in our experiments (Figure 2). Generally, such an approach, which is essentially a weighted combination of distilled models, is known to be not very effective. We will make this more explicit in the main draft. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I stand by my original (positive) assessment.
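The log-space weight update the rebuttal mentions can be sketched as a standard multiplicative-weights round kept entirely in log space. This is an illustrative reconstruction under that assumption, not the actual `ddist:ClfPlayer.log_space_update()` code:

```python
import numpy as np

def log_space_update(log_w, losses, eta=0.5):
    """One multiplicative-weights round kept entirely in log space.

    Over many boosting rounds per-example weights can under/overflow;
    storing log-weights and normalizing with log-sum-exp avoids this,
    which matters at the scale of ImageNet-sized weight matrices.
    """
    log_w = log_w + eta * np.asarray(losses)    # multiplicative update
    log_w = log_w - np.logaddexp.reduce(log_w)  # renormalize to sum to 1
    return log_w

# Uniform weights over n examples; higher-loss examples gain weight.
n = 4
log_w = np.full(n, -np.log(n))
log_w = log_space_update(log_w, losses=[0.1, 0.9, 0.9, 0.1])
w = np.exp(log_w)
```

Only the final exponentiation leaves log space, so intermediate rounds never materialize tiny or huge raw weights.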
Summary: This paper proposes B-DISTIL, a progressive distillation algorithm that allows for an easy trade-off between accuracy and inference time/latency at runtime. By modeling knowledge distillation as a zero-sum game, B-DISTIL utilizes intermediary connection modules to train and aggregate the sub-student models progressively, resembling traditional boosting methods. The paper provides mathematical proofs that guarantee the convergence and generalization of B-DISTIL. The experimental results demonstrate the efficiency of B-DISTIL in both anytime inference and early prediction tasks. Strengths: 1. The paper presents a novel perspective by redefining the knowledge distillation problem and effectively applying it to the tasks of anytime inference and early prediction. 2. The paper provides complete mathematical proofs and experimental validation to support its claims. Weaknesses: 1. While the method proposed in this paper introduces a novel perspective, its application scope and advantages appear to be quite limited. 2. It seems that some dynamic network structures could potentially be used to address the anytime inference problem. However, the paper appears to lack a comparative analysis with relevant methods in terms of results. 3. It might be worth considering modifying the title. B-DISTIL is more like a training method specifically designed for efficient inference rather than a knowledge-distillation-related approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would like to know the role of knowledge distillation in this context. Why not directly use ground truth (gt) as the fitting target? 2. What results would be obtained if the teacher model is directly replaced with softened labels? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback and questions, which we hope we have addressed in our response below. **Application scope.** We respectfully disagree with the reviewer that the scope and advantages of our work are quite limited. Due to resource constraints in efficient inference applications, it is common to leverage small- to medium-scale models as a starting point, e.g., models with fewer than a hundred layers for image tasks, or small models for sensor/audio data streams (see, e.g., [1, 2]). These are applications/scenarios that stand to benefit significantly from the flexibility and efficiency enabled by our approach, B-DISTIL. However, even at larger scales, where the inference-time data distribution is skewed towards 'easier' samples, our results show that B-DISTIL can be advantageous in the average case, as the majority of the predictions can be completed quickly (see for example Google-13, Table-1, T=50%). **Dynamic Networks.** We thank the reviewer for pointing out dynamic networks, which is a broad term that can refer to many potential approaches for dynamically adjusting model structure or parameters (see e.g., the survey [3]). At a high level these are categorized into methods that depend on input samples, the training procedure, or the inference procedure. The works most related to ours are those that depend on samples and dynamically adjust the model architecture (e.g., early exit, layer skipping schemes). Relative to existing methods from this category, our work provides a principled way of performing decomposition in the presence of large capacity gaps and early exit requirements, which can apply to both anytime inference and early exit problems, and is not tied to a specific modality (e.g., image, language). We have already discussed some of the most related approaches from the broad class of dynamic networks in our related work section (e.g., Huang et al.
2018, Ruiz and Verbeek, 2020 (HNE)) and have compared empirically to E-RNN, a representative approach, but we will include a broader discussion that better positions these methods in the context of dynamic networks more generally. **Role of distillation.** We employ distillation instead of just the ground-truth labels to make learning with smaller-capacity models possible. The temperature smoothing step of distillation, combined with directly optimizing on the teacher's outputs, aids our training procedure, particularly in early rounds where the capacity gap between the teacher model and the student model is high. We could certainly replace the teacher model with the smoothed labels it produces for later rounds of B-DISTIL. However, by doing so we lose the effects of non-deterministic preprocessing steps, for instance random rotations and crops for images, on the teacher logits. Distillation is therefore a key component of our approach, as reflected in the title and algorithm name. *[1] Machine Learning at the Network Edge: A Survey. M. G. Sarwar Murshed et al. ACM Computing Surveys (2021).* *[2] Visual Wake Words Dataset. Aakanksha Chowdhery, Pete Warden, Jonathon Shlens, Andrew Howard, Rocky Rhode. arXiv:1906.05721 (2019).* *[3] Dynamic Neural Networks: A Survey. Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, Yulin Wang. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).* --- Rebuttal Comment 1.1: Title: Additional questions Comment: Thank you again for taking the time to review our submission. We hope our responses have resolved the reviewer's concerns, but we are happy to discuss further if the reviewer has any additional questions. --- Rebuttal Comment 1.2: Comment: Thank you for your detailed response. I appreciate that you have addressed some of my concerns, and as a result, I am willing to raise my score.
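The temperature-smoothing step referred to above can be illustrated with the standard temperature-softened distillation objective (Hinton-style). This is a minimal NumPy sketch under that assumption; the function names and temperature value are illustrative, not the paper's exact loss:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; larger T smooths the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) between temperature-smoothed distributions.

    The T^2 factor is the usual convention that keeps gradient
    magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

loss_same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = distillation_loss([0.0, 0.0, 0.0], [2.0, 0.5, -1.0])
```

With a high temperature, even a low-capacity student receives gradient signal from the teacher's full output distribution rather than a one-hot label, which is the point made about early rounds with a large capacity gap.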
Summary: The authors describe a new method for knowledge distillation with an ensemble of lower-capacity student models, and draw connections between their method and classical boosting approaches. They provide a theoretical analysis of the risk of this method, and demonstrate the benefits of their approach in learning tasks with a limited inference budget, either in terms of computing cost or speed of inference. Strengths: A compelling idea. Approaching the task of ensemble distillation through a boosting perspective is to my knowledge a novel idea, and the connections that can be made to the boosting literature as a result are quite interesting. At face value, the B-DISTIL algorithm does not seem limited to the distillation setting, and it would be interesting to understand how it compares more generally to other boosting algorithms. Weaknesses: As an overall weakness, I found the presentation of the paper to be very confusing. In particular: - The relationship between two-player games and boosting needs to be better described in the related work and problem formulation. Schapire and Freund do not simply "show that weak learners can be aggregated to produce strong learners" (lines 90, 91) but rather establish key correspondences between the formulation of two-player games and boosting algorithms. Language from this correspondence ("players", "minimax value of the game", "ensemble of predictors", "weak learners") is used throughout Section 3.1 without a clear description of how two-player games and boosting relate to one another, making the exposition difficult to follow. - Notation needs to be more carefully defined throughout the text. For example, the constants $L, M, N, R$ are all used without explicit reference to what they represent in the main text (see questions below for more notation issues). - It is unclear to me how student models are constructed. Do the configurations in Tables 2-7 provide specifications for the student models used?
Or rather for $\mathcal{F}_0$? Are the different rows of the table different base models, or do they somehow relate to the use of intermediate layer connections? How are intermediate connections implemented in each of the specific model architectures described? Overall, I believe that the quality of the paper suffers significantly from issues with presentation, and I am willing to reconsider my score if these issues are addressed. Another weakness of this work is the comparison to previous work with respect to experimental findings. It would be useful to know how the results in Figure 4 compare to anytime inference as described in Huang et al. 2018 or Ruiz and Verbeek, 2020 for image classification results. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - What is an (ordered) set of low-inference-cost hypothesis classes? (Lines 115-117). Does this correspond to an ensemble with an increasing number of member models? If so it should be clearly stated that this is the case. How does $\mathcal{F}$ relate to $\mathcal{F}_m$? (Line 121). - How does $\mathcal{F}_r$ relate to $\mathcal{F}_0$? How is $\mathcal{F}'_r$ related to $\mathcal{F}_r$? - At first glance, the weak learning condition (Definition 1) appears to me quite different from other weak learning conditions in the traditional boosting literature. Is there an interpretation of this condition that is consistent with other definitions of weak learning? This would be good to know. - The claim "Existing boosting methods for classification treat multi-class settings (L > 1) as L instances of the binary classification problem (one vs. all)" (Lines 162-166) does not apply to AdaBoost.M1. Are the weak learners that you study here unable to meet the weak learning condition for AdaBoost.M1? If they meet this condition, it would be a useful baseline against which to compare the performance of this method.
- In Figure 3, it would be useful to know the correspondence between the total number of FLOPS required by the ensemble, and the corresponding model accuracy. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: As the authors state, some limitations of their methods include the need to design the class of student models, and the potential additional cost of evaluating models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your many great suggestions for improving the presentation of our work, particularly the problem formulation. **Boosting, two-player games: relationship, terminology and notation.** Due to space limitations we shortened the exposition on boosting, zero-sum games and weak-to-strong learning. However, we agree with the reviewer that the relationship between these notions is much deeper and more intricate than just "aggregating weak learners to strong learners". In fact, the connections between online and multi-objective optimization and model compression, as seen for example in dense model theorems and regularity lemmas from complexity theory (see the exposition by Luca Trevisan [1] and Theorem 2 in particular), were a key motivation for our work. We will include a detailed discussion on the connections between these notions in the appendix, with the aim both of improving exposition and of encouraging the flow of ideas between the communities. We will also include an introduction of these ideas in the main text, casting classical boosting as a two-player game with the aim of introducing the terminology and notation. **Student models: ordered classes, connections and model configurations.** The configurations specified in Tables 2-7 are of the *unique* student models used for the distillation run. Note that the same student model configuration may be reused in successive rounds. The corresponding hypothesis classes are ordered from low inference latency to high when moving from top to bottom in these tables. The connections used are implemented by overloading the forward pass of the candidate models at training time [code: `ddist.candgen:op_add_connections`]. The connections themselves are all-reduce-like (stateless) operations [code: `ddistexps.autosearch.gen_fwds`]. $\mathcal{F}_r$ and $\mathcal{F'}_r$ are recursively defined, with $r=0$ specified by $\mathcal{F}_0$.
We will update the tables to make this clear and include a discussion of how the connections are implemented in the appendix. **Weak learning condition and AdaBoost.M1.** At a conceptual level, the notion used in our paper and the standard weak learning guarantees are very similar. Both guarantees require that, for an arbitrary reweighting (represented by K in our paper), we are able to find a weak learner that correlates with the hidden function (represented by the labels in the usual boosting setup and by the teacher network in our setup). From this perspective, both algorithms can be seen as "boosting" weak correlations to strong correlations through iterative reweighting. For binary classification, B-DISTIL and AdaBoost can be shown to have the same weak learning condition. For multi-class settings, they have a similar weak learning condition, which strictly speaking is stronger than the weak learning condition used in the binary classification setting (see Section 5 in [2]). The main difference between the methods is that while AdaBoost.M1 only considers the prediction outputs w.r.t. labels, $I(y_i == h(x_i))$, we work with the teacher logits and the difference between the logits, $f(x_i) - h(x_i)$. AdaBoost.M1 abstracts away the details of finding a weak learner for a particular data distribution as part of a subroutine WeakLearn [2]. As we note in the main draft, finding weak learners in this sense is difficult [3], but since we use a smooth loss and a weighting tied directly to logits, we enjoy the advantage of being able to employ distillation directly on the residuals and the weak learning condition (Equations 6-7) in FIND-WL. We will add the correspondence between the weak learning conditions to the appendix. We will also modify Figure 3 to include actual values; thank you for this suggestion. The required FLOPS values (in million FLOPS) profiled for one vision and one time-series dataset in Figure 3 are provided below.
*CIFAR10*

| Model | Connections |
|-------|-------------|
| 7.37  | 0.00 |
| 44.96 | 0.14 |
| 63.2  | 0.71 |
| 63.2  | 1.53 |
| 63.2  | 2.34 |

*Google-13*

| Model | Connections |
|-------|-------------|
| 0.084 | 0.000 |
| 0.655 | 0.013 |
| 1.051 | 0.058 |
| 1.83  | 0.120 |
| 1.82  | 0.211 |

**MSDNets and HNE:** Thanks for bringing up Huang et al. 2018 (MSDNets) and Ruiz and Verbeek, 2020 (HNE), which we cite in our related work. We do not compare to these directly as they can essentially be seen as restricted variants of our approach. By picking the base class as dense networks at various scales as in Figure 2 (Huang et al. 2018), and the connections as dense connections, our algorithm can recover MSDNets from Huang et al. 2018. Similarly, by picking the base class as root nodes (Figure 1, Ruiz and Verbeek, 2020), and the connections as binary connections, we recover an HNE. Our method formalizes this intuition of a graph comprising a base class and a specific connection, and provides theoretical motivation within the framework of boosting and two-player games for a training procedure to construct such structures. This allows us to generalize these notions, and consider a broader set of connections (e.g., residual connections), base classes (e.g., recurrent networks) and data modalities, while taking into account their implementation costs when deciding on the final structure. The two mentioned works are closely tied to image datasets for anytime inference and are not readily applicable to sequential inference. They are also designed to optimize only for latency. Finally, we note that we do compare against the more closely related approach of E-RNN in our early-prediction experiments. *[1] Online Optimization: Regularity Lemmas. Luca Trevisan, In Theory (online), 2019.* *[2] A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Yoav Freund and Robert E. Schapire. Journal of Computer and System Sciences (1996).* *[3] Generalized Boosting.
Arun Suggala, Bingbin Liu, Pradeep Ravikumar. NeurIPS (2020).* --- Rebuttal Comment 1.1: Title: Thank you. Comment: Thank you for your response. I appreciate the detailed answers to the points mentioned, and I believe my concerns about presentation will be addressed by the revisions suggested by the authors in their rebuttal. I especially support the revisions suggested around the exposition of student models. I have updated my score to reflect these revisions.
NeurIPS_2023_submissions_huggingface
2023
Summary: The main focus of this paper is to address a problem in progressive knowledge distillation, which involves approximating a single large teacher model by utilizing an ensemble of multiple smaller student models. The authors propose an algorithm called B-DISTIL to tackle this specific problem. One notable advantage of this methodology is its capability to effectively balance the trade-off between cost and performance by adjusting the ensemble size of the student models. Strengths: 1. The problem formulation of "progressive knowledge distillation" is intriguing and well-motivated. In conventional knowledge distillation approaches for model compression, small student models of fixed sizes are typically employed, resulting in a fixed inference cost. A notable advantage of the proposed methodology is its ability to dynamically adjust the inference cost based on the available resources, which is a clear strength of the approach. 2. While the concept of approximating a function using a combination of multiple functions is not novel (as evident from the classical boosting methods mentioned by the authors), this paper provides a distinct contribution by connecting these ideas to the field of knowledge distillation. Weaknesses: 1. The scalability of the proposed methodology appears to be somewhat limited. It was anticipated that B-DISTIL would achieve a similar level of performance to the teacher model at the same inference cost. However, when applied to the TinyImageNet and ImageNet datasets, B-DISTIL falls short of meeting this expectation. 2. One important baseline is missing: deep ensembles using the model structure considered in B-DISTIL. Including this baseline would provide a clear motivation for the progressive formulation adopted in B-DISTIL. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
The proposed methodology is in line with the principles of slimmable networks (Yu et al., 2019), as it enables users to control the inference cost. Although there are distinctions in the primary categories of each approach, such as pruning and distillation, they share common properties in terms of model compression. Thus, it would be advantageous to include related works, such as slimmable networks, in the main text to ensure readers have a comprehensive understanding of the topic. 2. Could we adapt the inference cost based on the "difficulty" of the input? Given that there might be instances where accurate predictions can be made without the need for additional student models, the idea of limiting the ensemble size based on the difficulty is highly appealing. __Miscellaneous:__ 1. Typo: "B-DSTILL" in Figure 2. 2. Adjust the legend in Figure 3. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and positive assessment of our work. **Scalability:** Although B-DISTIL takes additional inference time to make accurate predictions for ImageNet, the teacher models in this case have 100+ layers. Tasks in efficient inference that rely on smaller models for image datasets, or scenarios (e.g., edge inference) involving sensor/audio data streams, are key application areas that stand to benefit from B-DISTIL. However, even at larger scales, where the data distribution during inference is skewed towards 'easier' samples, B-DISTIL can be advantageous in the average case, as the majority of the predictions can be completed quickly (Google-13, Table-1, T=50%). In fact, our goal in providing the ImageNet-scale experiments was to demonstrate the scalability of our implementation, especially since maintaining and updating the weight matrix of a 1000-class, 1M-training-image dataset can be non-trivial. We will clarify this in the main text and add a discussion about the techniques we use to manage compute requirements (e.g., streaming weights off-disk asynchronously [code: `ddist.data:DataFlowControl`], performing weight updates in log-space [code: `ddist:ClfPlayer.log_space_update()`], using a shared-memory object store [code: `ddist.dispatch` utilizing the `ray` parallelization library]) in the appendix. **Baselines and slimmable networks:** Thank you for the reference on slimmable networks. We will include a discussion of this and similar works in the related work section. In particular, slimmable networks take the approach of adjusting the width at inference time based on on-device resource constraints. While we focus more broadly on network hyperparameters, both approaches do share similar ideas when viewed as trading off on-device performance vs. inference cost.
Regarding baselines, the current NO-RESCHED baseline does use the same model structure for end-to-end distillation as used by B-DISTIL, without any activation sharing/connections. We will clarify this in the main text. We will also add end-to-end distillation performance using the same network architecture along with connections to the appendix. **Adapting inference cost based on difficulty of input:** We agree that limiting the ensemble size based on the difficulty of the input is an exciting prospect. In fact, the application of our method to early prediction in sequential inference can be interpreted as classifying based on the difficulty of the input (Table 1). However, this is an intuition-based argument, and we leave exploring whether the samples classified early have some notion of 'simplicity' as an interesting direction for future work. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' efforts and further insights. I keep my positive assessment.
Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis
Accept (poster)
Summary: The authors design robustness-aware quantization (RAQ) to speed up the noise estimation network by leveraging the robustness of early-stage diffusion models. Specifically, the authors found that the quality of generated images is less affected by the early stage. Therefore, they reduce the bitwidth of activations for the early stage, and maintain high-bit activations for the later stages. Experiments show that the proposed method can speed up early computations while maintaining generation quality. Strengths: 1. The idea of the paper is simple yet effective, promoting the application of Post-Training Quantization (PTQ) to the diffusion model. 2. The analyses in the paper are extensive. The authors demonstrate the early-stage robustness through the entropy transition across steps (Fig. 2) and noise injection. 3. The paper is well-organized and easy to read. 4. The authors also provide the code for reproducing results, showing the solidness of the work. Weaknesses: 1. In Tab. 1, RAQ only sets different bitwidths in five intervals, which is inconsistent with Algorithm 1, which sets different bitwidths at each step. Some explanation is needed. 2. The paper only reduces the activation bitwidth on top of Q-diffusion. However, compared with LDM-4, the FID of Q-diffusion increased by 1.21 on LSUN-Bedrooms (256x256) in Tab. 1. Therefore, the activation bitwidth should not be the only thing reduced. It would be better to further apply RAQ to the weight bitwidth to obtain a better trade-off between performance and efficiency. 3. The authors propose RAQ to accelerate early-stage computation. However, the running time, FLOPs, and model size are not provided to demonstrate the effectiveness of the proposed method. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can performance and computation be further optimized with finer-grained (beyond five intervals, even one interval per step) settings? 2. In Tab.
1, the FID of RAQ (3.99) with smaller activation bitwidth is better than that of Q-diffusion (4.17) on LSUN-Bedrooms (256x256). Please give some analysis of these results. 3. In Fig. 7, the generation quality of W4A6/8 differs noticeably from W4A8 and full precision (especially the two images in the lower-right and upper-left corners). This result differs from the quantitative comparison in Tab. 1, where RAQ has comparable (even better) FIDs with smaller activation bitwidth. Please give some explanations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not discuss limitations or potential negative societal impact in a separate section. It would be better if the authors added some discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response ctiM-1: The granularity of bitwidth optimization** In the context of the RAQ method outlined in Algorithm 1, choosing a finer granularity for the Bit_act update necessitates a larger number of sampled images for the optimization process. Meanwhile, our investigation revealed that consecutive timesteps within a 0.05T range exhibit similar sensitivity to noise injection. Therefore, our experimental configuration concentrated on optimizing the activation bitwidth with intervals of 0.05T for the Bit_act update. This approach ensures effective optimization while managing computational demands. On the other hand, despite optimizing the activation bitwidth with a granularity of 0.05T, some consecutive intervals present the same level of sensitivity to quantization and thus share the optimal bitwidth. Therefore, the experimental results summarized in Table 1 are composed of only 5 or even fewer intervals. **Response ctiM-2: Applying RAQ to weight bitwidth optimization** We agree with your opinion that, by applying the RAQ method to weight quantization, it is possible to prevent the FID increase in scenarios involving Q-diffusion with 4-bit weight quantization. However, it is important to note that applying the RAQ method to weight quantization is not within the scope of this paper. The principal aim of this research is to enhance the diffusion inference process without introducing supplementary costs into the inference stage. This is especially achievable when applying the RAQ method to activation quantization, as it does not entail any extra costs such as additional memory requirements. Conversely, applying RAQ to weight quantization increases the memory requirements due to the potential increase in the number of parameters, as we would have to save copies of parameters at different resolutions.
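The step-wise activation-bitwidth assignment discussed in Response ctiM-1 can be sketched as a piecewise-constant schedule over the diffusion timesteps. The interval boundaries and bitwidths below are illustrative placeholders, not the optimized values from the paper:

```python
def activation_bitwidth(t, T, schedule=((0.25, 4), (0.50, 5), (0.75, 6), (1.00, 8))):
    """Piecewise-constant activation bitwidth over the reverse process.

    `t` counts down from T (most noisy) to 0; early steps are robust to
    quantization noise, so they receive fewer activation bits, while the
    final detail-forming steps keep higher precision.
    """
    frac = (T - t) / T  # fraction of the reverse process completed
    for upper, bits in schedule:
        if frac <= upper:
            return bits
    return schedule[-1][1]

T = 1000
bits_early = activation_bitwidth(T, T)  # start of sampling: low bits
bits_late = activation_bitwidth(0, T)   # end of sampling: high bits
```

In practice the optimization described above would select the interval boundaries and bitwidths per model; with few distinct sensitivity levels, adjacent intervals collapse and only a handful of distinct bitwidths remain.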
**Response ctiM-3: The effectiveness of the proposed RAQ** As the reviewer pointed out, the operation counts will be updated in the final version. We introduce the concept of Bit Operations (BOPs), where BOPs are calculated as the product of OPs with the weight and activation bitwidths. This metric allows us to estimate the performance gain achievable through the RAQ method. In the case of LSUN-Churches, the full-precision baseline and Q-diffusion necessitate 4285.2 and 148.6 TBOPs (TBOPs: Tera BOPs) for a single image generation respectively, while the proposed RAQ method only requires 108.8 TBOPs. This indicates that the RAQ approach could potentially lead to a more than 39.4x speedup and energy savings compared to the full-precision baseline, while Q-diffusion achieves a 28.8x improvement over the full-precision baseline.

Table A-1. LSUN-Churches (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-8 | 32/32 | 4.09 | 4285.2 |
| Q-diffusion | 4/8 | 4.45 | 148.6 |
| RAQ | 4/6 | 4.64 | 108.8 |

Table A-2. LSUN-Bedrooms (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-4 | 32/32 | 2.96 | 20725.8 |
| Q-diffusion | 4/8 | 4.17 | 681.2 |
| RAQ | 4/6 | 3.99 | 504.8 |

**Response ctiM-4: The impact of bitwidth optimization with finer-grained settings** Thank you for the insightful question. However, further optimization is hardly achievable with finer-grained settings. As mentioned in **Response ctiM-1** regarding interval granularity, the optimization process was indeed carried out with a finer-grained approach. However, the optimization resulted in only 5 or even fewer intervals, because consecutive time steps exhibit similar sensitivity to quantization. Consequently, the potential for achieving further optimization through finer-grained settings is limited.
**Response ctiM-5: Explanation of FID improvement with the proposed RAQ on LSUN-Bedrooms** As you pointed out, it is interesting that LSUN-Bedrooms images generated with the proposed RAQ exhibit slightly better FID scores compared to Q-diffusion with W4A32 and W4A8. To investigate the reason behind this FID improvement, we conduct a detailed comparison of the images generated using full-precision activations and 4-bit activations in the early stage of the diffusion process. We find that the models with full-precision activations sometimes generate images with complex structures that are not easily recognizable as bedrooms. However, when 4-bit activation quantization is applied to the early stage, it simplifies the complex structures and results in images that more closely resemble bedrooms. This observation suggests that the step-wise activation quantization strategy employed in the proposed RAQ method helps refine the generated images, leading to improved quality and better alignment with the target LSUN-Bedrooms dataset. The detailed discussion with sampled images is provided in Supplementary Material B.2. **Response ctiM-6: The generation quality of the W4A6/8 Stable Diffusion model** As you highlighted, there is a quality issue when generating high-resolution (512x512) images using the Stable Diffusion model with the W4A6/8 configuration. This is due to the intricate nature of capturing fine details in high-resolution images, which demands exceptionally accurate computations. Consequently, this case becomes more sensitive to quantization effects compared to the 256x256 image generation instances reported in Table 1. However, it is important to note that our experimental results consistently emphasize the effectiveness of deploying a mixed-precision strategy through the RAQ method.
Specifically, the mixed-precision quantization achieved by the RAQ approach (W4A6/8) significantly enhances the quality of the generated images when contrasted with scenarios where activation bits are fixed at 6 bits (W4A6). Thanks for the constructive comments. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I read the authors' responses. I also read the comments and rebuttals from other reviews. The authors' reply solves my concerns, including bitwidth, FID improvement, generation quality, and effectiveness of RAQ. But I don't understand why the authors use BOPs instead of some common metrics such as FLOPs and running time. And it would be better if the authors discuss the limitations of the work, which Reviewer 4DBQ also mentions. --- Reply to Comment 1.1.1: Title: Discussion on BOPs and the limitations of the proposed work (1/2) Comment: Thank you for the valuable comments. As you rightly pointed out, when evaluating the efficiency of neural network models, metrics such as FLOPs and running time are commonly used. However, several prior works on quantized neural networks, such as NIPQ [1], have adopted BOPs for evaluating the efficiency of quantized models for the following two reasons. Firstly, FLOPs measure the number of floating-point operations, while the proposed RAQ involves integer operations in processing diffusion models, as both weights and activations are integers. Consequently, it is challenging to compare quantized models' efficiency with FLOPs. Secondly, while running time is a reliable efficiency metric for neural network models, accelerating diffusion models quantized with irregular bitwidths on GPUs is also challenging due to the absence of corresponding arithmetic units. In the context of hardware, computation efficiency is intrinsically dependent on both operation count and bitwidth, as computing units are composed of multiple binary logic units.
The required number of these binary logic units is influenced by both the operation count and the bitwidth. Hence, BOPs correlate directly with computation efficiency metrics such as energy efficiency and latency, especially on bit-scalable accelerators. In summary, we agree that our current RAQ results are limited in demonstrating running-time improvements on GPUs due to the significant proportion of irregular activation bitwidths. To overcome this limitation, potential directions include the utilization of bit-scalable accelerators to improve running time, or the exploration of more sophisticated quantization schemes that could increase the portion of 4-bit activations, which can then be effectively accelerated on GPUs. [1] Shin, Juncheol, et al. "NIPQ: Noise Proxy-Based Integrated Pseudo-Quantization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Summary: This paper proposes to quantize diffusion models to a different extent along the iterative process for image generation. The main motivation of the proposed approach is that the diffusion model is robust to input distortion at early stages (i.e., noisy stages) of the iterative process. Therefore, the proposed approach starts with 4-bit quantization and gradually increases activation bits along the iterations. Experiments show that the proposed approach achieves improved performance with the same effective bitwidth. Strengths: 1. This paper has a good motivation. The empirical experiments show that it is legitimate to apply different rates of quantization at different stages of the diffusion process. 2. The proposed method effectively improves the performance with a reduced bitwidth, as shown in Table 1. 3. The idea is simple and easy to follow. Weaknesses: 1. From Table 1, the bitwidth for each timestep is model-specific. That means optimization has to be done for each model. It would be good to have an analysis of the robustness of the bitwidth selection. 2. As one of the main objectives is to improve the sampling efficiency, a comparison of runtime should be included. This is important for readers to understand the improvement brought by the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall, I think the dynamic quantization in this paper is a legitimate approach for improving the efficiency of diffusion model sampling. Please refer to the weaknesses section for the questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please consider discussing the limitations and potential societal impact of the proposed approach in the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comment. We will now discuss 1) the bitwidth selection for different models, and 2) improvement of the sampling efficiency with RAQ. **Response 4DBQ-1: The bitwidth selection for different models** As you correctly highlighted, the bitwidth optimization with the proposed RAQ is a model-specific optimization. The experimental results presented in Table 1 have been attained through the activation bitwidth optimization process as illustrated in Algorithm 1. The optimization of bitwidth is executed iteratively for each model, involving FID measurement. However, it is worth noting, as detailed in Section 4.3, that the proposed RAQ method holds the potential to reduce the optimization time required for model-specific bitwidth optimization from $O(m^T)$ to $O(m+T)$. **Response 4DBQ-2: Improvement of the sampling efficiency with RAQ** While accelerating diffusion models quantized with irregular bitwidths (e.g., 6 bits) on a GPU presents challenges, the utilization of specialized hardware, like bit-scalable accelerators, could offer a promising solution for processing these models [1]. These accelerators are purpose-built to harness the benefits of quantization on a bit-by-bit basis, resulting in a nearly linear improvement in computing efficiency, encompassing both latency and energy consumption, with decreasing bitwidth. We introduce the concept of Bit Operations (BOPs), where BOPs are calculated as the product of OPs with weight bitwidth and activation bitwidth. This metric allows us to estimate the performance gain achievable through the RAQ method. In the case of LSUN-Churches, the full-precision baseline and Q-diffusion necessitate 4285.2 and 148.6 TBOPs (TBOPs: Tera BOPs) for a single image generation, respectively, while the proposed RAQ method only requires 108.8 TBOPs.
This indicates that with the utilization of specialized accelerators, the implementation of the RAQ approach could potentially lead to a more than 39.4 times speedup and energy savings compared to the full-precision baseline, while Q-diffusion can achieve a 28.8 times improvement compared to the full-precision baseline.

Table A-1. LSUN-Churches (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-8 | 32/32 | 4.09 | 4285.2 |
| Q-diffusion | 4/8 | 4.45 | 148.6 |
| RAQ | 4/6 | 4.64 | 108.8 |

Table A-2. LSUN-Bedrooms (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-4 | 32/32 | 2.96 | 20725.8 |
| Q-diffusion | 4/8 | 4.17 | 681.2 |
| RAQ | 4/6 | 3.99 | 504.8 |

[1] Fu, Yonggan, et al. "2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency." MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture. 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. The response has addressed my questions and hence I would keep my current rating.
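The complexity reduction claimed in Response 4DBQ-1 (avoiding the exponential $O(m^T)$ enumeration of joint schedules) can be illustrated with a hypothetical greedy sketch. Everything here is an assumption for illustration: `measure_fid` is a stand-in oracle, and this toy version spends on the order of $m \cdot T$ FID evaluations rather than the $O(m+T)$ reported for Algorithm 1; it only shows the idea of fixing one interval at a time.

```python
from typing import Callable, List

def sequential_bit_search(
    n_intervals: int,
    candidate_bits: List[int],                   # e.g. [8, 6, 4], high to low
    measure_fid: Callable[[List[int]], float],   # hypothetical FID oracle
    fid_threshold: float,
) -> List[int]:
    """Greedy per-interval bitwidth search.

    Starting from the highest bitwidth everywhere, lower each interval
    (earliest reverse-diffusion interval first) as far as the FID
    threshold allows, instead of enumerating all m**T joint schedules.
    """
    schedule = [candidate_bits[0]] * n_intervals
    for i in range(n_intervals):
        for bits in candidate_bits[1:]:
            trial = schedule.copy()
            trial[i] = bits
            if measure_fid(trial) <= fid_threshold:
                schedule = trial
            else:
                break  # even lower bits would only hurt FID further
    return schedule

# Toy oracle: FID grows when late (more sensitive) intervals use few bits.
toy_fid = lambda s: 4.0 + sum((8 - b) * i for i, b in enumerate(s)) * 0.1
print(sequential_bit_search(4, [8, 6, 4], toy_fid, fid_threshold=5.0))  # [4, 4, 6, 8]
```

With the toy oracle the search naturally assigns low bits to the early intervals and high bits to the late ones, mirroring the schedules reported in Table 1.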
Summary: This paper presents robustness-aware quantization (RAQ), a novel strategy to use mixed precisions for activations when quantizing diffusion models. The authors found that inaccurate computation during the early stages of the reverse diffusion process has minimal impact on the quality of generated images, and propose to use low-bit activations for the early reverse diffusion process while maintaining high-bit activations for the later stages. Experiments have been conducted for both unconditional and conditional generation using latent diffusion and stable diffusion on various datasets. Strengths: - The paper is well-structured and presents a clear motivation for leveraging the robustness of early-stage diffusion models to use lower-bit activations at those time steps to further improve the computation efficiency. - Experimental results show that the proposed method can use lower precisions for early-stage computation without sacrificing the quality of the generated images. - The experiments with stable diffusion indicate the effectiveness of the proposed methods on text-to-image applications. Weaknesses: My biggest concern with the proposed RAQ approach is its practicality. The method suggests using low-bit activations for the early denoising process and high-bit activations for the later stages. However, the paper does not provide sufficient arguments on how this varying precision can be efficiently implemented and how much additional benefit it can bring compared to the simple W4A8 case. In real-world applications, changing activation precisions could introduce complexities in designing and implementing corresponding kernels for different stages of the process, as the weight precision always needs to be upcast to the activation precision when performing the compute on conventional GPUs (e.g., the compute will always be WyAy for WxAy precisions, where x=4 and y>=4 for the settings discussed in the paper).
Consequently, this could limit the practical utility and impact of the RAQ approach. An analysis of the theoretical speedup or memory saving should be done to show that changing activation precisions for early stages can indeed bring substantial improvements in compute efficiency (so the extra effort for kernel implementation can be justified), and providing some additional simple experimental results would be preferred. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How is $Bit_{act}$ updated in Algorithm 1 with respect to t? It seems like the intervals are always multiples of 0.05T. Was $Bit_{act}$ only updated every 0.05T? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comment. We will now discuss 1) the practicality of the proposed RAQ, and 2) the granularity of Bit_act update. **Response ZFyZ-1: The practicality of the proposed RAQ** As the reviewer rightly pointed out, we agree that accelerating diffusion models quantized with irregular bitwidths on a GPU poses a challenge in terms of performance improvement in real-world scenarios. Nonetheless, we believe that the proposed RAQ provides a meaningful direction for advancing the acceleration of diffusion models. Let us discuss this in more detail below. First, while we agree that accelerating diffusion models quantized with irregular bitwidths (e.g., 6 bits) on a GPU is indeed challenging due to the lack of corresponding arithmetic units, we would like to emphasize that our main contribution is to show a direction in which bit resolution can be reduced for a portion of the parameters, and our approach is not fundamentally limited to a certain bit resolution. Hence, if other works that can further reduce the overall bit resolution of the network are developed independently of our scheme, they can be combined with our proposed scheme, and there is a chance that the portion of 4-bit parameters in our scheme can be increased to see realistic performance benefits. For example, our current approach involves a basic min/max-based quantization mechanism for activation quantization, leading to an 8-bit quantization for the baseline fixed-bitwidth case. Then, the allocation of activation bits using the RAQ method is distributed as (4b, 5b, 6b, 7b, 8b) = (20%, 20%, 10%, 40%, 10%) for LSUN-Churches (Table 1). However, in a scenario where an advanced quantization mechanism enables 6-bit activation quantization even for the baseline fixed-bitwidth case, there exists the possibility of increasing the proportion of 4-bit activations with the proposed RAQ, such as (4b, 5b, 6b) = (50%, 30%, 20%).
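The effective activation bitwidth implied by such an allocation is simply the step-weighted average of the per-interval bits. A minimal sketch (the function is our own; the allocations are the ones quoted above, the second being the hypothetical 6-bit-baseline case):

```python
def effective_bits(allocation: dict) -> float:
    """Average activation bitwidth, weighted by the fraction of sampling steps."""
    assert abs(sum(allocation.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(bits * frac for bits, frac in allocation.items())

# Allocation reported for LSUN-Churches in Table 1:
print(round(effective_bits({4: 0.2, 5: 0.2, 6: 0.1, 7: 0.4, 8: 0.1}), 2))  # 6.0
# Hypothetical allocation if the fixed-bitwidth baseline were already 6-bit:
print(round(effective_bits({4: 0.5, 5: 0.3, 6: 0.2}), 2))                  # 4.7
```

This is how a schedule mixing 4- to 8-bit intervals can still be reported as "W4A6" in the comparison tables.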
Second, while accelerating diffusion models quantized with irregular bitwidths (e.g., 6 bits) on a GPU is indeed challenging, the utilization of specialized hardware, like bit-scalable accelerators, could offer a promising solution for processing these models [1]. These accelerators are purpose-built to harness the benefits of quantization on a bit-by-bit basis, resulting in a nearly linear improvement in computing efficiency, encompassing both latency and energy consumption, with decreasing bitwidth. We introduce the concept of Bit Operations (BOPs), where BOPs are calculated as the product of OPs with weight bitwidth and activation bitwidth. This metric allows us to estimate the performance gain achievable through the RAQ method. In the case of LSUN-Churches, the full-precision baseline and Q-diffusion necessitate 4285.2 and 148.6 TBOPs (TBOPs: Tera BOPs) for a single image generation, respectively, while the proposed RAQ method only requires 108.8 TBOPs. This indicates that with the utilization of specialized accelerators, the implementation of the RAQ approach could potentially lead to a more than 39.4 times speedup and energy savings compared to the full-precision baseline, while Q-diffusion can achieve a 28.8 times improvement compared to the full-precision baseline.

Table A-1. LSUN-Churches (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-8 | 32/32 | 4.09 | 4285.2 |
| Q-diffusion | 4/8 | 4.45 | 148.6 |
| RAQ | 4/6 | 4.64 | 108.8 |

Table A-2. LSUN-Bedrooms (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-4 | 32/32 | 2.96 | 20725.8 |
| Q-diffusion | 4/8 | 4.17 | 681.2 |
| RAQ | 4/6 | 3.99 | 504.8 |

[1] Fu, Yonggan, et al. "2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency." MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture. 2021.
**Response ZFyZ-2: The granularity of Bit_act update** In the context of the RAQ method outlined in Algorithm 1, choosing a finer granularity for the Bit_act update necessitates a larger number of sampled images for the optimization process. Meanwhile, our investigation revealed that consecutive timesteps within a 0.05T range exhibit similar sensitivity to noise injection. Therefore, our experimental configuration concentrated on optimizing the activation bitwidth with intervals of 0.05T for the Bit_act update. This approach ensures effective optimization while managing computational demands. Thanks again for the comments. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Dear Authors, Thank you for the comprehensive rebuttal and the clarifications provided. The discussion of bit-scalable accelerators and some potential implications of the RAQ method resolve some of my concerns. I am still inclined to think the application scenarios of RAQ in real life are relatively limited (echoing Reviewer eumA), but after careful consideration, I think this work demonstrates an intricate property of the early-stage robustness in diffusion models and provides empirical validation of how to utilize this property to benefit quantization, which is indeed novel and could inspire future research in the NeurIPS community. Thus, I have decided to raise my rating slightly. Best regards, Reviewer ZFyZ
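The 0.05T update granularity described in Response ZFyZ-2 amounts to a piecewise-constant bitwidth schedule with 20 buckets over the T sampling steps. A hypothetical sketch: the schedule values are illustrative, chosen only to match the (4b, 5b, 6b, 7b, 8b) = (20%, 20%, 10%, 40%, 10%) allocation mentioned for LSUN-Churches.

```python
def bit_act(t: int, T: int, schedule: list) -> int:
    """Activation bitwidth for reverse-diffusion step t (0 = earliest step).

    `schedule` holds one bitwidth per 0.05*T interval, i.e. 20 entries,
    so Bit_act only changes at interval boundaries.
    """
    assert len(schedule) == 20
    bucket = min(int(20 * t / T), 19)  # which 0.05T interval t falls in
    return schedule[bucket]

# Hypothetical 20-entry schedule matching the (20%, 20%, 10%, 40%, 10%) allocation:
schedule = [4] * 4 + [5] * 4 + [6] * 2 + [7] * 8 + [8] * 2
T = 1000
print(bit_act(0, T, schedule))    # 4: earliest, most noise-robust steps
print(bit_act(500, T, schedule))  # 7: mid-to-late steps
print(bit_act(999, T, schedule))  # 8: final, most sensitive steps
```

Because the bitwidth is constant within each bucket, only 20 values need to be searched, which is what makes the per-interval optimization tractable.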
Summary: The author initially notes that errors in the early stages of the reverse diffusion process result in minimal disturbance to the final generated image. As a solution, they suggest employing low-bit activations for the initial reverse diffusion process while preserving high-bit activations for the subsequent stages, in conjunction with PTQ. Strengths: - The idea is clear and easy to understand - The proposed RAQ method outperforms other methods such as Q-diffusion Weaknesses: - Could the authors explain how the entropy is calculated and why higher randomness in the pixel values makes the images blurrier? - Comparison to other methods: the authors mention two PTQ methods, PTQ4DM and Q-diffusion, but only provide quantitative and qualitative comparisons to the baseline and Q-diffusion. - In Section 3.2, it seems obvious that adding the same amount of noise to a noisier image will have less influence than adding it to a less noisy image. - The authors also did not explain why, in Figure 3, the performance on the two different datasets is so different. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Why are the images in Figure 1(a) mirrored? - What does the term W4A6/8 mean in Figure 7? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comment. We will now discuss 1) the relationship between entropy and image clarity, 2) the exclusion of a comparison with PTQ4DM, 3) the effects of noise addition on images with varied clarity levels, 4) the performance difference between the two datasets in Figure 3, 5) the correction of mirrored images in Figure 1(a), and 6) the clarification of the notation "W4A6/8" in Figure 7. **Response rGNp-1: Relationship between entropy and image clarity** Entropy quantifies the level of randomness or uncertainty within a dataset, and when applied to image generation, it reflects the diversity or randomness in the pixel values of the generated images. The entropy of the random variable $X$ is defined by the following equation [1]: $H(X)=-\sum_{x}p(x)\log p(x)$ Higher entropy indicates a wider range of pixel values, which can introduce more noise and randomness into the images. This noise can lead to fluctuations in pixel values that do not align with the actual image structure, ultimately causing a decrease in image sharpness and clarity. In contrast, lower entropy suggests more structured and consistent pixel values, contributing to clearer and sharper images. For detailed information regarding the calculation of entropy within the context of our study, please refer to Supplementary Material A.2. [1] Shannon, Claude Elwood. "A Mathematical Theory of Communication." The Bell System Technical Journal 27.3 (1948): 379-423. **Response rGNp-2: Exclusion of comparison with PTQ4DM** There are two key reasons behind our decision not to include a comparison with PTQ4DM in this study. Firstly, PTQ4DM primarily concentrates on quantizing both weights and activations to 8 bits. In contrast, the Q-diffusion approach achieves a more advanced level of quantization by reducing the bitwidth of weights to 4 bits while retaining activations at 8 bits. This advancement positions Q-diffusion as a state-of-the-art quantization technique for diffusion models.
Secondly, PTQ4DM focuses on low-resolution image generation using DDIM-based models. The reported FID results in their work correspond to image resolutions such as 32x32 for CIFAR-10 and 64x64 for ImageNet. On the other hand, our paper places its focus on generating high-resolution images, specifically at a resolution of 256x256 for LSUN images and 512x512 for Stable Diffusion. The significant discrepancy in image resolution creates challenges in directly comparing our method and PTQ4DM due to the absence of compatible data points. **Response rGNp-3: The effects of noise addition on images with varied clarity levels** As the reviewer correctly indicated, the analysis presented in Section 3 leads us to an additional conclusion: the influence of adding noise to already noisy images is relatively less significant compared to adding noise to images that possess greater clarity. These conclusions stem from two main insights from our analysis. Firstly, images generated during the initial phases of the diffusion process show heightened noise levels compared to those generated later. Secondly, the early-stage diffusion process exhibits a higher resilience against the introduction of noise. Hence, based on these insights, we can reasonably infer that a more aggressive quantization approach could be applied during the early stages of the process, as more aggressive quantization causes higher quantization noise. This strategic choice aligns with the inherent strengths of the early diffusion steps and their ability to accommodate higher quantization noise resulting from more aggressive quantization. **Response rGNp-4: The performance difference between two datasets in Figure 3** We believe that the variation in performance between the two datasets is attributed to the significant difference in their image resolutions. 
Specifically, the CIFAR-10 dataset comprises images with a resolution of 32x32, while the LSUN-Churches dataset contains images at a higher resolution of 256x256. The increased resolution of LSUN-Churches images introduces finer details into the generation process, making the image generation process more susceptible to noise. However, it is noteworthy that both datasets exhibit a similar early-stage resilience to noise injection, as demonstrated in our analyses. **Response rGNp-5: Correction of mirrored images in Figure 1(a)** We appreciate your observation regarding the mirrored images in Figure 1(a). It appears that the mirroring occurred unexpectedly during the figure preparation process. We will correct the mistake and update the images in the final version of the paper. Thank you for bringing this to our attention. **Response rGNp-6: Clarification of notation "W4A6/8" in Figure 7** We acknowledge the concern you raised regarding the notation "W4A6/8" in Figure 7. We aimed to convey that the activation bits of the Stable Diffusion model combined both 6-bit and 8-bit precision. However, we recognize that this notation might lead to confusion among readers. To address this issue, we will update the explanation of the "W4A6/8" notation in the caption of Figure 7 as follows: "Here, W4 denotes adopting 4-bit weights, A$n$ denotes $n$-bit activations, and A$n/m$ indicates adopting $n$-bit activations in the early stage of the diffusion stages, while the later stage adopts $m$-bit activations". We appreciate your valuable feedback, and thank you for bringing this to our attention. Thanks again for the comments.
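As a complement to Response rGNp-1, the Shannon entropy of an image's pixel values can be computed from the empirical pixel histogram. This sketch is our own illustration; the paper's exact procedure is in its Supplementary Material A.2.

```python
import math
from collections import Counter

def pixel_entropy(pixels) -> float:
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x) over pixel-value frequencies."""
    counts = Counter(pixels)
    n = len(pixels)
    h = 0.0
    for c in counts.values():
        p = c / n
        h -= p * math.log2(p)
    return h

# A constant image carries no randomness; a uniform spread over all
# 256 8-bit values reaches the maximum of log2(256) = 8 bits.
print(pixel_entropy([0] * 4096))             # 0.0
print(pixel_entropy(list(range(256)) * 16))  # 8.0
```

Noisier generated images have flatter pixel histograms and hence higher entropy, which is the sense in which higher entropy accompanies blurrier, less structured images.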
NeurIPS_2023_submissions_huggingface
2023
Summary: In this submission, the authors propose a novel approach to speed up the noise estimation network by leveraging the robustness of early-stage diffusion models. Specifically, they present an algorithm to modify the quantization bitwidth according to the diffusion step. The proposed method shows positive results in reducing activation bits below 8 bits. Strengths: • The writing of this manuscript is easy to follow, and the illustrations are clear. • This work is well-motivated. Based on the analysis, the authors provide insights on the different roles that different diffusion steps play and show the room to improve the PTQ process by treating early and later steps differently. • The experiments show positive results of the proposed method. Weaknesses: • The real-world benefits of reducing activation bits. With advanced samplers, the sampling steps of diffusion models are significantly reduced, e.g., to 50 steps or lower. Thus, the gain achieved through low-bitwidth calculation in the early steps may be marginal in real-world evaluation. On the other hand, bitwidth is usually a power of two. To my knowledge, some execution cores are designed to process 8-bit-only or 4-bit-only data. Irregular bitwidths like 6 bits are treated as standard bitwidths by padding zeros. Thus, the benefits of reducing to irregular bitwidths (e.g., 6 bits) instead of standard bitwidths (e.g., 4 bits) are questionable from the perspective of hardware. The authors are encouraged to provide real-world evidence of the benefits of RAQ or a discussion of the above concerns. • The choice of FID threshold. In the RAQ algorithm, the choice of FID threshold is critical since it determines the final bitwidth dictionary and thus the quantization gain. How do you set this hyperparameter for a new dataset? Technical Quality: 3 good Clarity: 3 good Questions for Authors: • How does RAQ perform in accelerating diffusion models in real-world scenarios?
• How do you set a reliable FID threshold for the RAQ algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The choice of FID threshold in the RAQ algorithm is unclear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response eumA-1: The influence of RAQ on accelerating diffusion models** As the reviewer rightly pointed out, advanced samplers have recently been presented to reduce the sampling steps, and we agree that accelerating diffusion models quantized with irregular bitwidths on a GPU poses a challenge in terms of performance improvement in real-world scenarios. Nonetheless, we believe that the proposed RAQ provides a meaningful direction for advancing the acceleration of diffusion models. Let us discuss this in more detail below. First, the approaches of reducing sampling steps and the proposed RAQ are orthogonal methodologies for accelerating diffusion models unless an extreme reduction in sampling steps is undertaken, such as employing a single-step sampling strategy. Recent works for reducing sampling steps typically fine-tune diffusion models so that multiple sequential sampling actions can be encompassed by fewer sampling steps. Even in such scenarios, the fundamental property of diffusion models, where Gaussian noise progressively transforms into desired images across sampling steps, remains applicable. This intrinsic characteristic enables each step to display unique responses to quantization, thereby making our concept applicable even in situations with diminished sampling steps. Second, while we agree that accelerating diffusion models quantized with irregular bitwidths (e.g., 6 bits) on a GPU is indeed challenging due to the lack of corresponding arithmetic units, we would like to emphasize that our main contribution is to show a direction in which bit resolution can be reduced for a portion of the parameters, and our approach is not fundamentally limited to a certain bit resolution.
Hence, if other works that can further reduce the overall bit resolution of the network are developed independently of our scheme, they can be combined with our proposed scheme, and there is a chance that the portion of 4-bit parameters in our scheme can be increased to see realistic performance benefits. For example, our current approach involves a basic min/max-based quantization mechanism for activation quantization, leading to an 8-bit quantization for the baseline fixed-bitwidth case. Then, the allocation of activation bits using the RAQ method is distributed as (4b, 5b, 6b, 7b, 8b) = (20%, 20%, 10%, 40%, 10%) for LSUN-Churches (Table 1). However, in a scenario where an advanced quantization mechanism enables 6-bit activation quantization even for the baseline fixed-bitwidth case, there exists the possibility of increasing the proportion of 4-bit activations with the proposed RAQ, such as (4b, 5b, 6b) = (50%, 30%, 20%). Third, while accelerating diffusion models quantized with irregular bitwidths (e.g., 6 bits) on a GPU is indeed challenging, the utilization of specialized hardware, like bit-scalable accelerators, could offer a promising solution for processing these models [1]. These accelerators are purpose-built to harness the benefits of quantization on a bit-by-bit basis, resulting in a nearly linear improvement in computing efficiency, encompassing both latency and energy consumption, with decreasing bitwidth. We introduce the concept of Bit Operations (BOPs), where BOPs are calculated as the product of OPs with weight bitwidth and activation bitwidth. This metric allows us to estimate the performance gain achievable through the RAQ method. In the case of LSUN-Churches, the full-precision baseline and Q-diffusion necessitate 4285.2 and 148.6 TBOPs (TBOPs: Tera BOPs) for a single image generation, respectively, while the proposed RAQ method only requires 108.8 TBOPs.
This indicates that with the utilization of specialized accelerators, the implementation of the RAQ approach could potentially lead to a more than 39.4 times speedup and energy savings compared to the full-precision baseline, while Q-diffusion can achieve a 28.8 times improvement compared to the full-precision baseline.

Table A-1. LSUN-Churches (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-8 | 32/32 | 4.09 | 4285.2 |
| Q-diffusion | 4/8 | 4.45 | 148.6 |
| RAQ | 4/6 | 4.64 | 108.8 |

Table A-2. LSUN-Bedrooms (256x256) generation results

| Model | W/A | FID | TBOPs |
|---|---|---|---|
| LDM-4 | 32/32 | 2.96 | 20725.8 |
| Q-diffusion | 4/8 | 4.17 | 681.2 |
| RAQ | 4/6 | 3.99 | 504.8 |

[1] Fu, Yonggan, et al. "2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency." MICRO 2021.

**Response eumA-2: FID threshold setting for the RAQ algorithm** The RAQ algorithm employs the FID of diffusion models without activation quantization as the FID threshold. This aligns with the central objective of the RAQ algorithm, which is to adaptively vary the activation quantization bitwidth across the different sampling steps while preserving the quality of image sampling. For example, when applying the RAQ algorithm to the LSUN-Churches dataset, we initially compute the FID of the LSUN-Churches dataset without activation quantization while generating 5,000 samples. This approach ensures that the optimization of activation quantization maintains FID at an acceptable level. Consequently, as illustrated in Table 1, the proposed RAQ technique achieves an effective activation bitwidth of 6 without compromising FID. On the other hand, we can achieve a lower activation bitwidth by slightly compromising FID. In this case, it becomes necessary to adjust the FID threshold. The amount of the adjustment depends on the predetermined level of FID tolerance.
For example, if a 10% increase in FID is acceptable, then the FID threshold is increased by 10%. For instance, in the LSUN-Churches case, we achieved an effective activation bit count of 5.60, resulting in an FID of 5.12. This FID value represents an approximately 10% increase compared to that of the full-precision activation bit configuration, where the FID is 4.45 (Table 1). Thanks for the comments. --- Rebuttal Comment 1.1: Comment: After reading the authors' rebuttal and other reviews, I have decided to keep my rating. I am satisfied with the authors' explanation of the choice of FID threshold, which should be clarified in future versions. However, the practical concerns regarding sampling steps and irregular bitwidths remain crucial weaknesses of this submission. I know RAQ is applicable to advanced samplers. My point is that the benefit in running time is marginal when there are very few steps. On the other hand, theoretical measurements like FLOPs, MACs, or BOPs that the authors introduced make sense when the underlying execution kernels are consistent. Most existing execution kernels are not precision-scalable, and quantization algorithms should be evaluated in that scenario. In summary, I believe the authors propose a simple and effective algorithm in theory, but with limited evaluation in terms of practical value.
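The threshold rule in Response eumA-2 reduces to a one-line relation; the function name and structure are our own illustration, with only the 4.45 FID figure taken from Table 1:

```python
def fid_threshold(fid_without_act_quant: float, tolerance: float = 0.0) -> float:
    """RAQ's FID threshold: the FID measured with full-precision activations
    (weights still quantized), optionally relaxed by a relative tolerance."""
    return fid_without_act_quant * (1.0 + tolerance)

# Strict setting: match the FID without activation quantization (LSUN-Churches).
print(fid_threshold(4.45))                            # 4.45
# Relaxed setting: accept ~10% higher FID to push the activation bits lower.
print(round(fid_threshold(4.45, tolerance=0.10), 3))  # 4.895
```

The tolerance directly trades image quality for bitwidth: a looser threshold lets the search assign low bits to more intervals, lowering the effective activation bitwidth.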
A Closer Look at the Robustness of Contrastive Language-Image Pre-Training (CLIP)
Accept (poster)
Summary: This paper performs a comprehensive study of various CLIP models on robustness to different visual factors, out-of-distribution detection, and calibrated uncertainty estimation. A total of 53 CLIP models, trained on different sources and dataset sizes and with different architectures, are studied, along with 32 CLIP models fine-tuned on ImageNet. Strengths: [Originality] To the best of my knowledge, no previous studies have run such experiments on these three aspects of CLIP models, especially considering CLIP models trained on different datasets. Some of the observations made in this paper are new and complementary to previous studies. For example, CLIP models are not robust in all aspects: they are less robust than models trained on ImageNet in a supervised way when poses are changed. Therefore, the findings are novel. [Significance] Although most conclusions that can be drawn from the experiments are already known, e.g., CLIP models are more robust than supervised models, some of them are less well known and may be valuable to the community. For example, CLIP models trained on WIT perform better than those trained on LAION on OOD detection. [Quality & Clarity] The quality and presentation of this paper are good. It is easy to understand and follow. Weaknesses: Some of the discussion of the experimental observations needs more support. 1. Line 177: "The shape bias of CLIP may be attributed to its objective, which involves training the model to associate text and image pairs": in Figure 2, it seems that fine-tuning CLIP models on ImageNet (with a supervised objective?) decreases the shape bias. However, it is also possible that the data source (ImageNet) is the reason. Is it possible to decouple this? Say, fine-tune CLIP with the contrastive objective on ImageNet and see if the shape bias stays the same. 2.
Line 246: "We notice that CLIP models trained on LAION-80M dataset exhibit lower calibration performance when compared to standard models." Is this comparison fair, given that CLIP models trained on LAION-80M generally have lower accuracy? Another minor concern I have is that some results from previous studies are not clearly discussed. For example, in ImageNet-X [1], it was already observed that "color-jitter augmentation improves robustness to color and brightness, but hurts robustness to pose." Since color-jittering is widely used in CLIP training, this should be discussed in the paper. [1] Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, and Mark Ibrahim. Imagenet-x: Understanding model mistakes with factor of variation annotations. In International Conference on Learning Representations, 2022. [Minor] Line 94: ImageNet 32 fine-tuned CLIP models -> 32 ImageNet fine-tuned CLIP models Line 141: Small -> Smaller Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have the following questions that would help me better understand the paper, and I would appreciate the authors' reply to them: 1. Is there any insight into why WIT differs from LAION when serving as the CLIP training set, since I thought both were deemed to be comprehensive snippets of web image-text pairs? 2. What about other self-supervised models like MAE when compared with CLIP in terms of robustness? This would also help demystify the factors of datasets and objectives. 3. ImageNet-1k contains 1.3 million images, and ImageNet-21k contains about 14 million images, which are not far from the smallest LAION dataset considered in the paper. Is it safe to say that when the dataset size is similar, the supervised objective is still better in terms of calibrated uncertainty estimation? How about robustness and OOD detection?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the broader impacts in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Q1: Figure 2, fine-tuning CLIPs on ImageNet (with supervised objective?) decreases shape bias … data source (ImageNet) could be the reason … fine-tune CLIP with contrastive objective to see if shape bias stays the same. Insightful suggestion. All fine-tuned CLIP models in Fig. 2 use the supervised cross-entropy loss. To decouple the effects of contrastive learning and data source, we took a zero-shot CLIP with ViT-B-32 and fine-tuned it on ImageNet in two ways: with the standard cross-entropy loss and with a contrastive loss [a]. The shape biases are measured as 0.58, 0.40, and 0.56 for zero-shot CLIP, standard fine-tuned CLIP, and contrastive-loss fine-tuned CLIP, respectively. This indicates that ImageNet is likely not the primary cause of the shape-bias decrease. We also maintain our speculation that associating image embeddings with text embeddings could help learn shape-biased models. We will include the above discussion in the revised version. [a] Finetune like you pretrain: Improved finetuning of zero-shot vision models. CVPR'23 >Q2. Is it fair to compare CLIP trained on LAION-80M and standard models, as the former generally have lower accuracy? Thanks for raising this discussion. First, it is common practice to compare uncertainty estimation performance between models with varying accuracy levels, as exemplified by references [b,c,d] and many others. Second, in the main paper, CLIP models trained on LAION-80M are used to demonstrate that CLIP models do not always exhibit superior calibration compared to other ImageNet models, contrary to existing observations. We further clarify that, at comparable accuracy levels, CLIP models trained on LAION-80M are observed to have higher calibration error (ECE). Last, we firmly believe in the necessity of a comprehensive evaluation of CLIP models' robustness, which is the primary motivation behind our study.
When measuring CLIP models' robustness, we advocate considering more perspectives alongside the commonly used classification accuracy. We will revise Lines 246-247 to make this clear. [b] On calibration of modern neural networks, ICML'17 [c] Improving model calibration with accuracy versus uncertainty optimization, NeurIPS'20 [d] Revisiting the Calibration of Modern Neural Networks. NeurIPS'21 >Q3. Some results from previous studies are not clearly discussed … "color-jitter augmentation improves robustness to color and brightness but hurts robustness to pose". Insightful point. We agree that the data augmentation used in CLIP training is crucial for factor-level robustness. Specifically, data augmentations can improve robustness to related factors, but with spill-over effects on unrelated factors. As the reviewer points out, color-jittering improves robustness to color and brightness variations while influencing object pose. Similarly, scale-based augmentation could facilitate the learning of scale-invariant features. It would be interesting to use our benchmark to study the impact of data augmentations and other dataset curation techniques (e.g., filtering). We will include the above discussion in Section 4.1. >Q4: Any insight on why WIT differs from LAION Good suggestion. We think the performance difference between the WIT and LAION datasets could be attributed to two main factors. First, dissimilarity in data sources used for training: Common Crawl for LAION and unknown data sources for OpenAI's training set. As pointed out by [e], Common Crawl may be a noisier data source (weaker connection between images and associated text) or contain less diverse images. Second, the filtering process in the LAION dataset curation: LAION employed a small-scale ViT-B/32 model for filtering, which may result in a substantial number of poorly matching image-text pairs.
Moreover, recent work [f] suggests that more careful curation of LAION yields CLIP models competitive with those trained on WIT. This further highlights the importance of dataset curation for learning robust CLIP models. We will include the above discussion. [e] LAION-5B: An open large-scale dataset for training next-generation image-text models. NeurIPS'22 [f] DataComp: In search of the next generation of multimodal datasets, Arxiv 2023 >Q5. Other self-supervised models like MAE … demystify the factors of datasets and objectives Thanks for this valuable suggestion. During the rebuttal, we evaluated three MAE models on the three aspects; see Figure R-3 in the uploaded PDF. Here are our observations: 1) Visual factor-level robustness: the three MAE models lie in the area of models pre-trained on more data; 2) OOD detection: MAE models also lie in the area of models pre-trained on more data, and some zero-shot CLIP models achieve higher performance than MAE models; 3) Calibration: before TS, MAE models have higher uncertainty estimation performance than CLIP models trained on LAION but lower than those trained on WIT. Post TS, CLIP models become better than MAE. >Q6: Is it safe to say that when the dataset size is similar, the supervised objective is still better in calibration, robustness, and OOD detection? Interesting point. According to the discussion in [g], when the dataset size is small (e.g., 15M), CLIP models' classification accuracy on IN-1K drops. In contrast, at the same dataset size, supervised objectives yield high-accuracy models. When considering a similar training-set scale, we anticipate the following phenomena: 1) Factor-level robustness: CLIP exhibits lower accuracy on each factor while maintaining the same relative robustness trend as models trained on larger datasets. 2) OOD detection: CLIP's performance weakens compared to supervised models.
3) Calibration: Given the impact of training-set quantity on CLIP, we expect it to achieve less effective calibration than supervised models. [g] Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. I think the additional experiments on ImageNet help clarify the coupled objective and training-data factors. The answers also mitigate my other concerns. Therefore, I would like to keep my original rating. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer A6cY, Thank you for your valuable and constructive suggestions! We are happy to hear that your concerns have been addressed. Kind Regards, Authors
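For reference, the expected calibration error (ECE) discussed throughout this thread can be computed as follows; a minimal NumPy sketch of the standard equal-width binning estimator, not necessarily the paper's implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: bin predictions by confidence, then average the
    absolute gap between accuracy and mean confidence per bin,
    weighted by the fraction of samples falling in that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

For intuition, a model that predicts with 90% confidence but is right only half the time gets an ECE near 0.4, while a well-calibrated model scores near 0.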
Summary: This paper studies and compares CLIP and CLIP-FT to standard models on a range of different tasks including OOD robustness, OOD detection, and model calibration. The paper constitutes a meta-analysis across different model architectures / training datasets / training algorithms or loss functions. The authors claim:

#### Robustness
CLIP sometimes outperforms other models on certain visual factor variations, but sometimes underperforms them. CLIP models are more shape biased (the authors claim due to VLM training); a resolution increase decreases shape bias. Fine-tuning makes CLIP's behavior more similar to other image models.

#### OOD Detection
CLIP generally performs better than other models in OOD detection; the relationship between IID accuracy and OOD detection performance largely follows accuracy on the line. Fine-tuning negatively affects performance.

#### Model Calibration
CLIP isn't significantly more calibrated than other models; data distribution affects calibration. Temperature scaling makes CLIP more calibrated than other models and removes the dependence on data distribution. Temperature scaling is apparently more important for CLIP. When temperature scaled, CLIP is better OOD calibrated than other models.

#### Test Time Prompts
Using more prompts generally helps across tasks, except in the case of some visual factor variations where it doesn't make much of a difference.

Strengths:
- The introduction is well written, motivates the paper, and gives a clear overview of the claims.
- Well written, easy to follow.
- Objective tone/analysis and solid experimental design.
- Some nice points/findings dispersed throughout the paper.

Weaknesses:
- No clear central argument or claim; the paper seems more like a pastiche of different experiments than a focused analysis. There is little interpretation of the experimental findings. I have written very detailed questions concerning the OOD detection results, but similar questions apply to all sections.
- It looks to me like the paper tries to do too much and ends up being imprecise / too shallow in interpreting the results. For example, [2] focuses solely on how the data distribution affects robustness. Here, the authors observe this finding and only note that "The above observations highlight the importance of the choice of training source in determining not only the overall accuracy but also the factor-level behaviors of CLIP models. This suggests that visual factor-level robustness should be considered when designing the training source for CLIP models." This is a "political" answer which does not provide concrete action items for future researchers. Some claims/analyses are probably wrong: * "Temperature scaling reveals a consistent trend of CLIP models, and they still lie on a distinct trend from other models." This doesn't seem to be the case ALL of the time. SSL models look pretty similar to CLIP on the "NLL (Temp-scaled)" and "ECE (Temp-scaled)" ImageNet-A plots. The amount of difference seems to be dataset specific. In fact, I only see a significant effect in ImageNet-A, ECE vs ECE temp-scaled, where the temperature scaling seems to affect all models. * "This observation indicates that unlike robustness and out-of-distribution detection, the calibration of CLIP models is influenced by both training data distribution and quantity." I don't see how this follows from your data - aren't robustness and OOD detection also influenced by data distribution? * The CLIP training dataset likely overlaps with the datasets used for OOD detection, which makes it unclear how meaningful the claims are regarding CLIP's good OOD detection performance. This issue should at least be discussed. * OOD detection evaluations are missing important baselines (see details below). - The paper lacks novelty: the influence of training data / fine-tuning on CLIP performance has been studied in detail before [1,2,3].
This paper evaluates many different models and is thus a meta-study of the previous findings. In that case, the literature review should be expanded, and the paper must be better positioned, e.g. "in [3], the authors investigate the influence of fine-tuning on OOD robustness. We here investigate whether their claims hold across a broader range of models" or something like this. Though [3] already provides a thorough and careful empirical investigation across many different models, and I am not sure whether this paper offers much beyond the results presented in [3]. - The graphs were sometimes a bit hard to read/interpret - there were a lot of them and usually the conclusion was different/unique depending on the plot. - Some kind of unifying principle might be nice. ### Minor: - Please be more specific about results/claims in the abstract. - 3.1 - Would be nice to explain/list model choices at some point, i.e. refer to Supplement A.2 here. - 3.2 (Robustness) - more explanation of why you chose 10 of the 16 factors would be nice. - Line 40, style / grammar: "our study further study …" - Line 42: "training distributions" should be "distribution". - Line 50: remove the extra space before the full stop. - Line 280: "highlighting their potential for the robust and reliable applications" → "highlighting their potential for robust and reliable applications." Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the central claim/argument? * Maybe pivot towards a study similar to [2] in the context of OOD detection and model calibration? That is: be more specific on what causes the observed OOD detection results. * What about the data distribution matters for OOD detection and model calibration? In general, I would appreciate some deeper insights; I don't feel like I learned all that much from this paper.
### Robustness results (Section 4) Lines 138-144: The authors observe that CLIP models perform better on certain visual factors and worse on others. Some interpretation of the results is necessary to make the observations valuable to the research community. For example, the authors find that fine-tuning on ImageNet changes the robustness properties of CLIP models, but offer no explanation / interpretation of the results. ### OOD detection results (Section 5): Line 209: “Upon closer examination of the training distribution, we have observed that the correlation trend between ID accuracy and OOD detection performance is largely dependent on the training source.” -> It is not clear to me how sensible the OOD detection task is for CLIP models. While LAION may not contain e.g. all of the ImageNet training images, doing zero-shot inference is possible because CLIP has seen similar images during training, thus, “zero-shot” is ultimately a misleading term which becomes an issue once OOD detection is considered. It is not clear to me that the datasets used for OOD detection here, namely iNaturalist [52], SUN [53], PLACES [54], TEXTURE [55] and ImageNet-O [7], are actually not part of LAION in the first place, making measuring performance on those datasets an ID task, rather than an OOD task. Thus, I find it unsurprising that the training data distribution matters a lot for this particular task. Further, comparing the performance of models trained on ImageNet which has no intersection with ImageNet-O to models trained on LAION which may have a high intersection with ImageNet-O seems an unfair comparison to me. Could the authors please comment on this issue, in particular, on the sensibility of doing OOD detection for CLIP models trained on e.g. LAION? Continuing on this point: Line 212: “Moreover, with same ID accuracy, CLIP models trained on WIT exhibit superior OOD detection performance compared to their counterparts trained on LAION on three OOD scenarios. 
This further indicates the importance of training source selection for CLIP. When developing dataset curation methods, it is valuable to investigate the influence of training sources on OOD detection performance.” I would like to see a more nuanced and detailed discussion on this finding. Is it maybe that WIT just has a larger overlap with the OOD detection datasets? What is the concrete suggestion here? Line 219: “Some CLIP-FT models even achieve worse OOD detection performance than Zero-shot CLIP models.” This is in line with the points above and what we may be seeing here is that with fine-tuning, the model weights move towards being compatible with ImageNet-1K and further away from the original training distribution which likely overlaps with OOD detection test sets. Due to all of the points raised above, I find the sentence in Conclusion, line 303: “Furthermore, while maintaining comparable accuracy on in-distribution dataset, CLIP models tend to exhibit higher performance in OOD detection.” to be misleading. The recent paper [4] (published after the NeurIPS submission deadline) makes the very similar observation that the CLIP training dataset has a large influence on zero-shot OOD detection, and also remarks on the effects of the fine-tuning procedure. Comparing how the conclusions made in this paper align with theirs would be very helpful. An evaluation on their dataset would also be interesting to see, since they show that Places, Texture and ImageNet-O have severe issues with ID data. They seem to contradict the notion that CLIP zero-shot is well suited for OOD detection. I see no reason not to compare fine-tuned CLIP models with those fine-tuned on ImageNet 21K, which are very commonly used. Further, the evaluation of OOD detectors should include strong baseline models like the Mahalanobis detector, compared to which the CLIP based models clearly fall behind according to the evaluations in [4]. 
Not including the mentioned important baselines (which are also included in [5]) might be the main explanation for why the evaluations in Figure 3 (and in the previous papers claiming zero-shot CLIP with MCM is a strong OOD detector) suggest such a positive image of the CLIP-based models. More centralized and focused claims/analyses would raise my score. #### References: * [1] Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP * [2] Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP) * [3] The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning * [4] Bitterwolf et al. ICML 2023: "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" * [5] Yang et al. NeurIPS 2022: "OpenOOD: Benchmarking Generalized Out-of-Distribution Detection" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations were not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
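As context for the MSP baseline discussed in this review (Hendrycks & Gimpel's maximum softmax probability score), a minimal NumPy sketch; the example logits are made up for illustration:

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher scores are treated as more
    in-distribution; thresholding the score flags inputs as OOD."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# A peaked prediction scores higher (more "ID") than a flat one.
peaked = msp_score(np.array([[10.0, 0.0, 0.0]]))[0]
flat = msp_score(np.array([[1.0, 1.0, 1.0]]))[0]
```

The Mahalanobis detector mentioned in the review instead scores inputs by their feature-space distance to class-conditional Gaussians, which is why the two baselines can rank models quite differently.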
Rebuttal 1: Rebuttal: >Q1: No clear central argument ... unifying principles Thanks. Our central argument is the necessity of a comprehensive evaluation of CLIP's robustness. In contrast to current approaches focused on classification accuracy, we propose integrating three new safety-driven objectives: factor-level robustness, OOD detection, and uncertainty calibration. This enables a thorough assessment of critical factors (e.g., training source and prompt) on CLIP's behaviours. Our analysis highlights the significance of training sources on the three objectives. >Q2-1: Interpretation on robustness … fine-tuning changes robustness Thanks. As pointed out by Reviewer A6cY (Q1), CLIP uses color-jitter, which could improve its robustness to color and brightness but hurt Pose. Also, fine-tuning CLIP with contrastive loss maintains shape-bias in prediction. Moreover, as discussed in [a], standard fine-tuning introduces spurious correlation for CLIP and thus hurts robustness. [a] Masked Images Are Counterfactual Samples for Robust Fine-tuning >Q2-2: Maybe pivot towards [2] for OOD detection and calibration Thanks. Following [2], we indeed investigated crucial factors for CLIP, including training source, dataset quantity, test-time prompt, contrastive loss, and architecture. Through this analysis, we emphasize the significance of data distribution: 1) data distribution affects the accuracy trend of zero-shot CLIP in OOD detection and calibration; 2) CLIP's calibration is further influenced by dataset quantity. Additionally, we stress the role of prompt learning, as different prompts can alter the performance trend on both tasks. 
>Q3-1: Some claims probably wrong: 'TS reveals a consistent trend of CLIP … distinct from other models' We clarify that Fig. 4 gives two observations: 1) Before TS, CLIP models from different sources and subsets exhibit distinct trends; 2) Post TS, CLIP models exhibit a similar trend, diverging from other models: with comparable classification accuracy, they demonstrate lower calibration errors. >Q3-2: 'Unlike robustness and OOD detection, calibration is influenced by both training distribution and quantity' We clarify that CLIP models' factor-level robustness and OOD detection are affected by the training distribution, while calibration is influenced by both training distribution and quantity. >Q4: Lack of novelty … training data/fine-tuning influence have been studied in [1-3] … a meta-study. Our work differs significantly from [1-3], which concentrate solely on classification robustness under distribution shifts. In contrast, our study investigates CLIP models' robustness from three new perspectives: factor-level robustness, OOD detection, and calibration. Furthermore, [1-3] do not analyze how training data and fine-tuning impact CLIP models across these aspects. We provide such an analysis and comprehensively examine training distribution, quantity, test-time prompt, and fine-tuning schemes, revealing novel observations about CLIP models' behaviour. >Thanks for raising the insightful discussion on the sensibleness of OOD detection for CLIP. We think potential data overlap neither compromises CLIP's promising results nor undermines the sensibleness of the task. Please see below: >Q5-1: CLIP may have seen similar images during training … "zero-shot" is misleading We follow the notion of "zero-shot" OOD detection (Ming et al., 2022): CLIP allows users to redefine ID/OOD classes flexibly without requiring detector retraining. >Q5-2: Not clear OOD datasets are not part of LAION ...
unfair comparison. We hold the same opinion with the authors of LAION: 'we do not consider potential test set overlap to be a serious threat for the validity of results'. Dataset overlap may arise if the OOD datasets are also included in Common Crawl. In classification, OpenAI found only a few examples of substantial performance differences due to data overlap. Similarly, as discussed in [4] for OOD detection, the overlap between IN-21K and NINCO does not cause substantially different changes between models with and without pretraining on IN-21K. This further indicates that potential data overlap does not make OOD detection an unreasonable task for CLIP. >Q5-3: Maybe WIT has a larger overlap than LAION with OOD datasets The performance gap in OOD detection (and classification) could be attributed to training-source quality and the filtering process. Please see Q2/Reviewer A6cY. >Q5-4: Fine-tuning shifts weights towards IN-1K and away from the original distribution that likely overlaps with OOD datasets Fine-tuning CLIP on IN-1K could learn features unsuitable for OOD detection due to spurious correlation [a]. Further, using a broader set of data/classes from IN-21K may potentially mitigate this issue. >Thanks for sharing the post-submission work [4]. The discussion has strengthened the OOD detection aspect of our study: >Q6-1: [4] observes the CLIP training dataset has a large effect on zero-shot OOD detection and remarks on the fine-tuning effect ... fine-tuned CLIP on IN-21K After careful checking, [4] only included two zero-shot CLIP models and did not study the impact of the training set. IN-12K is a subset of IN-21K excluding classes with few samples, resulting in approximately 85% overlap [4]. IN-12K is widely used for CLIP fine-tuning. Moreover, [4] reports that fine-tuning on IN-21K helps OOD detection, and we observe that fine-tuning CLIP on IN-12K is helpful. We will cite [4] and discuss the mutual finding. >Q6-2: Evaluation on their dataset ... Mahalanobis detector Thanks.
We noted that [4] reports "it is difficult for many OOD detectors to improve consistently over MSP", supporting the rationale for using MSP in our study. During the rebuttal, we established our benchmark on NINCO as per [4], using Mahalanobis and MSP. Please refer to Figure R-4 in the uploaded PDF. Our findings in Sec. 5 align consistently with the NINCO results. Further, we observe promising zero-shot CLIP detection accuracy compared to other ImageNet models with the Mahalanobis detector. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Dear authors, I have read the rebuttal but I find that most of my questions have not been addressed. 1) No central argument in the paper. Your response was: "In contrast to current approaches focused on classification accuracy, we propose integrating three new safety-driven objectives: factor-level robustness, OOD detection, and uncertainty calibration." The objectives are not new at all, and studying all of them separately does not provide a unifying objective for the storyline. As stated in my original review, the paper reads as a conglomeration of different experiments without much interpretation of the results. In principle, you could have added more safety-driven objectives, such as mechanistic interpretability, fairness analyses, adversarial attacks, etc., to have even more experimental results without a unifying principle. I think the ideas in this paper are still too scattered for it to be accepted at NeurIPS. 2) "Interpretation on robustness … fine-tuning changes robustness" Here, I stated that the results are very similar to Ref [3] (The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning). I also wrote that the interpretation / analysis of the results is very shallow. The authors responded by writing about the effect of color jitter in CLIP, which neither answers nor concerns any of my questions. 3) "TS reveals a consistent trend of CLIP … distinct from other models".
First, it is impossible to compare the metrics with and without temperature scaling because the y-axes are different. The authors claim that CLIP models do not follow the trends of “other models” and that temperature scaling leads to consistent trends in CLIP models. My issues: 1) I find the first statement to not be true because I do not see any trends in “other models” which CLIP models may follow, e.g. ImageNet-S / ImageNet-V2-A (ECE temp scaled). All models are scattered across the plot and I do not see any distinct trends. 2) Models denoted by different icons do cluster together on some datasets, with and without temperature scaling. But in those cases, all model classes seem to be in somewhat different clusters, and I do not see a notable distinction of CLIP vs other models. 4) Lack of novelty. Yes, the authors here study model performance across several axes in contrast to previous work which examined model behavior carefully along one of those axes. But there is still very little analysis and interpretation of the results. What new insights do we get? As stated in my original review, the insights offered by the authors are not actionable or concrete, such as “The above observations highlight the importance of the choice of training source in determining not only the overall accuracy but also the factor-level behaviors of CLIP models. This suggests that visual factor-level robustness should be considered when designing the training source for CLIP models.” The observation that the training data source influences robustness is trivial and well-known. - As stated in my original review, there are papers which analyze one of the aspects studied here in great detail. For example, [2] (Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP)) analyzes the role of the data distribution. The authors here seem to replicate some of the results of [2], e.g. that the training distribution matters. 
But then, what is the benefit of the current study if it just reproduces some results of more focused studies? Wouldn’t I be better off reading multiple studies which carefully study one aspect of generalization instead of this paper? I think a meta-study needs to have a lot more analysis and interpretation to add value. 5) “We hold the same opinion with the authors of LAION: ‘we do not consider potential test set overlap to be a serious threat for the validity of results’. Please provide a citation for this. Were the LAION authors specifically concerned with OOD detection in that statement? Further, there are issues when evaluating OOD robustness of foundation models, please see Ref [4]. 6) OOD findings on NINCO: Contributions: “With comparable in-distribution accuracy, CLIP models are competitive or better in detecting OOD data than other ImageNet models.” I do not see this effect in the attached Figure R-4. The best results on AUROC are achieved by the ImageNet-21K models (orange circles). a. “Our findings in Sec. 5 align consistently with NINCO results. “ Please be more specific: which NINCO results align with which of your Sec. 5 results? I find this not to be true. The results in Sec. 5 are very different from the NINCO results, see my argument below. b. “Further, we observe promising zero-shot CLIP detection accuracy, compared to other ImageNet models with Mahalanobis detector.” I do not share this observation. The best models in this plot are ImageNet-21K models, no? --- Reply to Comment 1.1.1: Title: Follow-Up Discussion (1/2) Comment: Dear Reviewer jPAm, Thank you for your constructive feedback. Please see the discussion below: > Q1: Further discussion on central argument We call for attention on safety-related objectives beyond classification accuracy alone, when evaluating CLIP’s robustness. Recent findings highlight the pivotal role of training distribution on CLIP’s classification robustness, whereas other factors exhibit limited influence. 
However, this paper raises the concern that relying solely on the training set distribution does not suffice to ensure complete robustness. We study three representative objectives (visual factor-level robustness, OOD detection, and calibration). Note that we do not study them separately but consistently investigate the impact of crucial factors on CLIP’s behaviour on each objective. Following the analytical approach outlined in Ref [2], we consider model architecture, training distribution, training set quantity, fine-tuning, contrastive loss, and test-time prompt. Our experiments emphasize that factors like test-time prompts (OOD and calibration) and training set quantity (calibration) remain important considerations for a comprehensive evaluation of CLIP's robustness. Last, we view our work as a starting point to call attention to safety-related objectives. It would be interesting to include other objectives (e.g., the mentioned fairness). > Q2: “Interpretation on robustness … fine-tuning changes robustness” … results are very similar to Ref [3] First, Ref [3] studies the effect of fine-tuning on effective robustness in overall classification accuracy. In contrast, we consider visual factor-level robustness as well as texture-shape bias. Ref [3] did not report observations from this perspective. We will cite [3] and discuss the difference. Second, we observe that fine-tuning CLIP could help some visual factors (e.g., Pattern) but hurt others (e.g., Texture). We speculate that standard fine-tuning introduces spurious correlations [a]. This may bias CLIP towards specific visual properties, thereby compromising factor-level robustness on some factors. Further, fine-tuning CLIP with contrastive loss maintains shape bias in prediction (Q1/Reviewer A6cY). Moreover, as suggested by Reviewer A6cY (Q3), data augmentations used in zero-shot CLIP could improve robustness to related factors, but with spill-over effects to unrelated factors. 
[a] Masked Images Are Counterfactual Samples for Robust Fine-tuning > Q3: “TS reveals a consistent trend of CLIP … distinct from other models” First, the Y-axes are ECE (first two columns) and NLL (last two columns) in Fig. 4. We use “Temp-scaled” to denote that TS is used for calibration. Second, based on the feedback, we restate the observations of Fig. 4 to make them clear: 1) Before TS, the zero-shot CLIP models from different sources and subsets do not have a unified trend; 2) Post TS, all zero-shot CLIP models exhibit a similar trend, which other models do not follow. We did not claim that other models have a trend; rather, they are scattered, as the reviewer mentioned; 3) Post TS, zero-shot CLIPs lie below most other model groups: with similar classification accuracy, CLIP tends to achieve lower calibration error than other models. > Q4: Discussion on Novelty We respectfully disagree with the statement that our work is not novel. We contribute a comprehensive study to better understand CLIP's robustness. [New perspectives] Following the analytical approach in Ref [2], we study crucial factors for CLIP: 1) training source, 2) dataset quantity, 3) test-time prompt, 4) contrastive loss, 5) fine-tuning, and 6) architecture. Through extensive analysis, we underscore the data source's critical role across three new perspectives while also uncovering overlooked factors, such as data quantity's impact on calibration. To our knowledge, we are the first to study the impact of these factors on the three new perspectives. [New observations] Unlike prior works (e.g., Ref [1-3]) focusing on overall classification, we further explore new objectives and obtain more insights. For instance, high overall classification robustness does not guarantee robustness to individual visual factors or preserve the shape bias in CLIP predictions. Our investigation also uncovers under-explored aspects. Key observations include the reduction of shape bias in CLIP predictions after ImageNet fine-tuning. 
Contrary to previous observations, CLIP models are not consistently more calibrated than other ImageNet models, owing to training data distribution and quantity. The influence of training source and fine-tuning strategies is evident in their impact on OOD detection performance. [Benchmark Application] In line with a recent study [b] that highlights the importance of dataset curation in CLIP, our benchmark provides comprehensive metrics to fully assess the curated datasets, alongside classification accuracy. [b] DataComp: In search of the next generation of multimodal datasets
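As context for the calibration metrics debated in this thread (ECE before and after temperature scaling, Fig. 4), here is a minimal illustrative sketch in numpy. The function names and the equal-width binning scheme are our own assumptions, not the paper's evaluation code; in practice the temperature T is fitted by minimizing validation NLL.

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax with temperature T (T would be fitted on validation NLL in practice)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence, average the |accuracy - confidence| gaps."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

A larger T flattens the softmax and lowers confidence, which is how temperature scaling can reduce overconfidence-driven ECE without changing the argmax predictions.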
Summary: The authors closely study the robustness of vision-language models. They try to investigate their robustness in terms of common visual attributes, detecting OOD inputs, and their power in providing calibrated predictions. They consider many different CLIP models and other vision encoders with different architectures and training procedures to have a comprehensive study and fairly compare CLIP models with other ones. They provide some more detailed findings about these models w.r.t. the aforementioned criteria. Strengths: + This paper runs an extensive set of experiments using various models, various datasets, and under different settings. + Therefore, these results will be insightful for practical use cases where people want to decide which model to use or diagnose possible errors/failures of their models under different conditions. Weaknesses: + I didn't see enough new ideas in this paper. + I mean, running extensive studies is definitely valuable, practical, and insightful, but is there other similar work published in NeurIPS where scaling up and running more experiments is the main contribution? I would appreciate it if the authors correct me in understanding their main contribution and change the rating correspondingly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are these results generalizable and reliable? i.e., are we going to see similar trends for future CLIP models as well? I would want to get some more insights about these findings. At this point, this study looks completely empirical and I am not convinced if they are statistically significant. Can we really trust those plots and argue some general statements? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Q1: Did not see enough new ideas ... running extensive studies is definitely valuable, practical, and insightful, but is there other similar work published in NeurIPS where scaling up and running more experiments is the main contribution? We appreciate your recognition of our extensive studies and experiments. We clarify that our contributions go far beyond the scale of experiments: - New perspectives: In contrast to existing analysis paradigms (e.g., [a,b,c]) centred around overall classification accuracy, we advocate for the integration of three novel safety-driven objectives: factor-level robustness, OOD detection, and uncertainty calibration. This approach allows us to thoroughly assess and understand the impact of critical factors on CLIP models’ robustness, including training source, quantity, network structure, test-time prompt, and fine-tuning strategy. - New observations: Our extensive investigation into CLIP models uncovers several previously under-explored aspects. Key observations include the reduction of shape bias in CLIP predictions following ImageNet fine-tuning. Contrary to previous assumptions, CLIP models are not consistently more calibrated than other ImageNet models, owing to training data distribution and quantity. The significance of training sources and fine-tuning strategies is evident in their impact on OOD detection performance. Furthermore, while test-time prompts do not affect CLIP's visual factor-level robustness, they influence the trends in OOD detection and uncertainty calibration. - Comprehensive data curation metric: Aligned with recent research [d] underscoring the importance of training dataset curation, our benchmark introduces comprehensive metrics to evaluate curated datasets alongside their classification performance. 
[a] Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP, NeurIPS'22 [b] Data Determines Distributional Robustness in Contrastive Language Image Pre-training (CLIP), ICML'22 [c] The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning, TMLR'22 [d] DataComp: In search of the next generation of multimodal datasets, Arxiv 2023 > Q2. Are these results generalizable and reliable? ... similar trends for future CLIP models as well? Can we really trust those plots and argue some general statements? Thanks for raising this discussion. **First**, to ensure the validity and reliability of our study, we followed established practices [a,b,c] meticulously, giving careful consideration to various factors such as training sources, network architectures, fine-tuning procedures, test datasets, and other comparable ImageNet models. We adopt methodologies from previous research to study each objective: ImageNet-X [e] for analyzing visual factor-level robustness; Cue-conflict stimuli [f] to examine shape bias in model predictions; [g] to gauge CLIP's zero-shot OOD detection capabilities; and [h] to delve into the quality of uncertainty estimation. [e] ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations, ICLR'23 [f] ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, ICLR'19 [g] Delving into Out-of-Distribution Detection with Vision-Language Representations, NeurIPS'22 [h] Revisiting the Calibration of Modern Neural Networks, NeurIPS'21 **Second**, we firmly believe that the insights gained from our analysis extend beyond the CLIP models evaluated in this study and are applicable to future models as well. To validate this point, during the rebuttal, we expanded our investigation to include the very latest CLIP models (beyond the submission deadline). 
These models are trained on DataComp or CommonPool [d]: the same training source as LAION but with different dataset quantities. The results shown in Figure R-3 in the uploaded PDF demonstrate that our observations hold true for these newly included models, further affirming the scalability and generalizability of our findings. **Third**, to facilitate further research and analysis on CLIP models, we will release our experimental setups, including the database and plot codes. The above discussion will be included in the revised version. --- Rebuttal Comment 1.1: Comment: Dear authors, thanks for your response. I increased my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer rieU, Thank you for your positive assessment and helpful suggestions on our work. Best, Authors
Summary: This paper analyzes the CLIP model's robustness through a large number of experiments, covering three main points: resilience to visual factor variations, calibrated uncertainty estimations, and the ability to detect anomalous inputs. Strengths: 1. The experiments in this paper are very sufficient, the research content is solid, and some new views are proposed based on the experimental results. There is a rich analysis of the robustness of CLIP. 2. This paper makes a significant contribution to the field by providing a comprehensive evaluation of CLIP models. Furthermore, the experimental findings presented in this paper offer valuable insights for future endeavors aiming to enhance the out-of-distribution (OOD) detection performance and robustness of CLIP models. Weaknesses: There is no deeper analysis of the reasons behind these experimental results in this paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Please provide some details about the experiments. 2. I want to ask whether parameter-efficient fine-tuning (PEFT) methods, such as LoRA or Adapter, hurt the performance of OOD detection, given that fine-tuning sometimes can. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have provided the limitations of their work. This paper has done a lot of experiments and provided some novel discoveries that I think contribute to the CLIP community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Q1. There is no deeper analysis of the reasons behind these experimental results in this paper This work emphasizes the incorporation of three new safety-driven objectives: factor-level robustness, OOD detection, and uncertainty calibration. This enables a comprehensive assessment of critical factors on CLIP models' robustness, encompassing training source, quantity, network structure, test-time prompt, and fine-tuning strategy. Our benchmark highlights the significance of training sources in this context. Building on the insightful comments from all reviewers, during the rebuttal we delved deeper into several intriguing aspects, including the performance difference between LAION and WIT (Q4/Reviewer A6cY), the retention of shape bias when using contrastive loss for fine-tuning (Q1/Reviewer A6cY), the possibility of spill-over effects from data augmentation impacting unrelated robustness factors (Q3/Reviewer A6cY), and potential data overlaps in OOD detection (Q5/Reviewer jPAm). We acknowledge the potential for deeper analysis of our observations and consider the above discussion a starting point that could inspire further research. >Q2. Please provide some details about the experiments In the appendix, we provide illustrations of the publicly available models and datasets used in our study. We follow: ImageNet-X [a] to conduct analysis of models' visual factor-level robustness; Cue-conflict stimuli [b] to study shape bias in model decisions; [c] to understand CLIPs' performance on zero-shot OOD detection; and [d] to investigate the quality of uncertainty estimation. 
[a] ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations, ICLR'23 [b] ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, ICLR'19 [c] Delving into Out-of-Distribution Detection with Vision-Language Representations, NeurIPS'22 [d] Revisiting the Calibration of Modern Neural Networks, NeurIPS'21 Furthermore, we are dedicated to facilitating future research on CLIP analysis, and as part of this commitment, we will release the plot codes and evaluation codes for the community to use and build upon. >Q3. I want to ask whether parameter-efficient fine-tuning (PEFT) methods, such as LoRA or Adapter, hurt the performance of OOD detection, given that fine-tuning sometimes can An insightful point. During the rebuttal, we studied the impact of parameter-efficient fine-tuning methods on OOD detection. In Figure R-2 of the uploaded PDF, we report the results of 8 CLIP models fine-tuned by CoOp [e] and Tip-Adapter [f]. We find that both methods increase the classification accuracy of CLIP models while decreasing their OOD detection performance. It would be interesting to further study the effect of PEFT methods on OOD detection. Also, increasing both classification and OOD detection performance would be a promising direction. [e] Learning to Prompt for Vision-Language Models, IJCV'22 [f] Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification, ECCV'22 --- Rebuttal Comment 1.1: Comment: Dear Reviewer 9D3z Could you please kindly check the other reviews and rebuttal, and raise your concerns if you have any? We are already close to the end of the author-reviewer discussion phase. Thanks, Regards, AC
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your detailed and thoughtful feedback. Inspired by your valuable suggestions, we have added more experimental analyses and included the suggested discussions. We summarize the experiments in the uploaded PDF: - In Fig. R-1, we evaluate the retrieval performance of zero-shot CLIPs. We observe that the retrieval performance of CLIPs correlates with their classification performance. Also, the performance trend is influenced by the training dataset distribution. - In Fig. R-2, we study the OOD performance of 8 new CLIP models fine-tuned by parameter-efficient fine-tuning methods: CoOp and Tip-Adapter. Both methods improve the models’ classification performance but lead to performance drops in OOD detection. - Fig. R-3 includes post-submission CLIP models and three MAE pre-trained models. We observe that they align with our observations in the main paper. - In Fig. R-4, we expand the OOD benchmark to NINCO, using both MSP and Mahalanobis detectors. Our findings in Section 5 of the main paper align consistently with the NINCO results. Also, zero-shot CLIP shows promising accuracy compared to other ImageNet models with the Mahalanobis detector. - Moreover, we show the retention of shape bias when using contrastive loss for fine-tuning (Q1/Reviewer A6cY). Last, we greatly appreciate that all reviewers recognize our study as comprehensive, sufficient, and solid. We emphasize that our main contributions go beyond just the scale of experiments. Specifically, our study unveils three novel perspectives (factor-level robustness, OOD detection, and uncertainty calibration) that contribute to a comprehensive understanding of CLIP models' robustness. We hope our response has addressed the initial concerns. Please let us know if you have any other questions. Kind Regards, Authors Pdf: /pdf/c53bebb44d7d8f66b33c945b1fc5002f3bd44d85.pdf
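For readers unfamiliar with the MSP detector and the AUROC numbers discussed around Fig. R-4, the pieces can be sketched as follows. This is an illustrative numpy implementation under our own assumptions (the rank-based AUROC uses the Mann-Whitney U equivalence and ignores ties); it is not the authors' evaluation code.

```python
import numpy as np

def msp_score(probs):
    """Maximum softmax probability: higher score = more confidently in-distribution."""
    return probs.max(axis=1)

def auroc(scores_id, scores_ood):
    """AUROC for separating ID (positive) from OOD samples by score.

    Computed via the Mann-Whitney U statistic: the probability that a random
    ID sample is scored higher than a random OOD sample (ties ignored).
    """
    s = np.concatenate([scores_id, scores_ood])
    ranks = s.argsort().argsort().astype(float) + 1.0  # 1-based ranks
    n_id, n_ood = len(scores_id), len(scores_ood)
    u = ranks[:n_id].sum() - n_id * (n_id + 1) / 2.0
    return u / (n_id * n_ood)
```

An AUROC of 0.5 corresponds to a detector no better than chance, and 1.0 to perfect separation, which is the scale on which the CLIP and ImageNet-21K models are being compared in the thread.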
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper aims to provide a comprehensive evaluation of robustness for pretrained vision-language models. Specifically, the authors benchmark around 100 pretrained models/classifiers. Based on these empirical results, this paper also provides corresponding discussions and analysis. Strengths: 1. The robustness problem of CLIP-like models is a valuable topic to study. 2. This paper provides a very comprehensive benchmark for the CLIP robustness problem. It may yield several benefits for follow-up research. 3. The corresponding discussion and analysis are solid enough to further inspire study in this area. Weaknesses: Even though I still have concerns about the technical contribution of this paper for the NeurIPS conference, I recognize the workload and benchmarking work of this paper. Thus, only a few comments for the weaknesses: 1) What about further adding image-text retrieval evaluation and analysis, since CLIP can be used for both classification and retrieval? It may make this paper more solid. 2) I would like to discuss with other reviewers about the contribution significance of this benchmarking work to adjust my final score. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations have been discussed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Q1. Further adding image-text retrieval evaluation and analysis ... make this paper more solid Thanks. Following this insightful suggestion, we included retrieval tasks on MS-COCO during the rebuttal. Figure R-1 in the updated PDF reports the results. We plot retrieval performance (image-to-text and text-to-image retrieval) relative to zero-shot image classification accuracy on the ImageNet validation set. On both retrieval tasks, we observe that retrieval performance correlates with image classification performance. In addition, CLIP models trained on different training sources appear to have different trends. We will include and discuss this new analysis in the revised version. >Q2. Contribution significance of this benchmarking work Thanks for raising this discussion. Beyond the existing analysis of CLIP's classification robustness, our study advocates for the integration of three novel safety-driven objectives: factor-level robustness, OOD detection, and uncertainty calibration. This allows us to thoroughly assess and understand the impact of critical factors on CLIP models’ robustness, including training source, quantity, network structure, test-time prompt, and fine-tuning strategy. Through extensive and careful analysis, our benchmark underscores the impact of training sources across the three new objectives. Furthermore, our extensive studies have unveiled several previously unknown aspects of CLIP models, deepening our understanding of their robustness behaviours. For instance, we demonstrate the shape bias in CLIP models' predictions, which diminishes after fine-tuning on ImageNet. Contrary to existing findings, CLIP models are not always more calibrated than other ImageNet models, and we attribute this to the impact of both training data distribution and quantity. Also, training sources and fine-tuning procedures have crucial effects on their OOD detection performance. 
Furthermore, test-time prompts do not impact CLIP's visual factor-level robustness but influence the performance trend of OOD detection and uncertainty calibration. Moreover, as very recent research [a] highlights the importance of training dataset curation, our benchmark provides comprehensive metrics to assess the curated datasets, alongside the classification performance. [a] DataComp: In search of the next generation of multimodal datasets, Arxiv 2023 --- Rebuttal Comment 1.1: Title: discussion Comment: Dear Reviewer 8Bsp Could you please kindly check the other reviews and rebuttal, and raise your concerns if you have any. We are already close to the end of the author-reviewer discussion phase. Thanks, Regards, AC
Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability
Accept (poster)
Summary: The paper presents a method to perform calibrated simulation-based inference. To do so, the paper employs the well-known coverage and proposes a way to differentiate through this term and to use it as a regularizer during training. The authors evaluate their method on benchmark tasks and conclude that it has good coverage and (sometimes) even outperforms existing methods in terms of log-likelihood. Strengths: **Originality**: The method is novel and the use of a differentiable sorting algorithm for differentiating through coverage is novel and useful. **Quality**: The theoretical part of the paper is done rigorously and the paper provides additional empirical results for the impact of hyperparameters and computational cost. **Clarity**: The figures are clear and support the messages of the paper. Weaknesses: **Quality**: I expect that the method is very expensive if the batch size is large because in this case, GPU will not help either. This should be clarified in the paper. I found it very interesting that CalNPE outperforms NPE in terms of log-likelihood. What exactly are the methods that the authors use to prevent overfitting? Always training for 500 epochs is clearly not something anybody would do in practice (L497). Please use a proper implementation of NRE or NPE to draw comparisons. The poor results of Appendix Fig 6 should be mentioned in the main paper and it should be highlighted that the method can be used to produce conservative posteriors, but that it is not suitable to produce calibrated posteriors. All tasks in the paper are very low dimensional. I would expect the NPE version of this algorithm to scale to high-dimensional tasks, but this would need empirical evidence. For NRE, I could imagine that importance sampling requires exceedingly many samples in high-d parameter spaces. Please clarify and ideally add tasks with a more high-dimensional parameter space. 
**Clarity**: Twice (L159 and L183), the authors propose alternative formulations of their method. They never empirically investigate these formulations and also do not describe why they are less good. I would appreciate it if the authors either added additional details on these methods or removed them entirely to avoid confusion. The paper introduces **many** symbols which makes the paper very tedious to read. I would appreciate it if the authors would redefine symbols in new sections to make the paper easier to follow. Also, some abbreviations do not really have to be defined (rarely used, e.g. KS, SAE, STE). Many papers are listed as `arxiv` although they got published. Please fix. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Why is the sorting-based computation preferable over the direct computation? The authors say that `we need to backpropagate through F_N(\alpha_k)`. Is this a problem in practice? L472: what do you mean by “proposed regularizer set to calibration objective” L505: what does “outstanding values in observations” mean? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors state limitations of their method, but some limitations have to be highlighted (see comments above: compute time for large batch sizes, poor performance for calibrated (not conservative) posteriors). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for taking the time to review our manuscript and for your comments. Below we would like to address the questions: - *I expect that the method is very expensive if the batch size is large because in this case, GPU will not help either. This should be clarified in the paper.* In L262 we say that the advantage of computation on GPU is due to the parallelization. Implicitly, this means that once <parameters dimensionality X batch size X number of samples $L$> exceeds the available memory, parallelization, and consequently GPU advantage is limited. We will explicitly mention this threat in the revised version of the manuscript. - *I found it very interesting that CA1NPE outperforms NPE in terms of log-likelihood. What exactly are the methods that the authors use to prevent overfitting? Always training for 500 epochs is clearly not something anybody would do in practice (L497). Please use a proper implementation of NRE or NPE to draw comparisons.* We use the same training protocol for both NPE and CalNPE that was used in Hermans et al., 2022 - literally, the same implementation that is available online. Although we run 500 epochs of training in total, only the best model based on validation log-likelihood is kept as the final model. In addition, Gradient Norm Clipping is used. We consider the implementation from Hermans et al., 2022 to be proper (which is based on some standard packages used in the field of SBI). Moreover, both the regularized and non-regularized versions use the same implementation allowing us to draw conclusions. - *The poor results of Appendix Fig 6 should be mentioned in the main paper and it should be highlighted that the method can be used to produce conservative posteriors, but that it is not suitable to produce calibrated posteriors.* We will add this information in the revised version of the manuscript. - *All tasks in the paper are very low dimensional. 
I would expect the NPE version of this algorithm to scale to high-dimensional tasks, but this would need empirical evidence. For NRE, I could imagine that importance sampling requires exceedingly many samples in high-d parameter spaces. Please clarify and ideally add tasks with a more high-dimensional parameter space.* Following the nowadays state-of-the-art evaluation proposed in Hermans et al., 2022 already imposes a huge computational effort. We see high-d parameter spaces as an important challenge and list them as a future work direction. However, with the available computational resources we are unable to provide empirical evidence in the short term. - *Many papers are listed as arxiv although they got published. Please fix.* In the revised version of the manuscript, we will update all the references that got published since we submitted the manuscript. - *L472: what do you mean by “proposed regularizer set to calibration objective”* Minimizing the calibration error, not the conservativeness error. We will clarify this information in the revised version of the manuscript. - *L505: what does “outstanding values in observations” mean?* The experimental protocol of Hermans et al., 2022 (which we follow) did not include data standardization. The highest value encountered in observations for the largest simulation budget of M/G/1 was 4e+7, which is the “outstanding value” in L505. However, it was unlikely to happen, and therefore for the small simulation budgets, the extreme observations were less extreme, leading to a stable learning process. We will replace “outstanding value” with “extreme value” to avoid confusion. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you very much for the detailed response. It clarifies most of my concerns, and I have increased my score to 6. 
In my opinion, the main limitation remains that the method is only being demonstrated on very low-dimensional parameter spaces, in particular because importance sampling could lead to issues in more high-D parameter spaces. It would be amazing if the authors could add such analyses for the camera ready version, but I do think that this is a strong paper either way. Congrats on this nice work!
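The expected-coverage diagnostic at the heart of this thread can be estimated empirically from posterior samples. Below is a minimal HPD-rank sketch with illustrative names and conventions of our own; the paper's actual regularizer additionally relaxes the indicator through a differentiable sorting algorithm so that it can be trained through, which this sketch does not attempt.

```python
import numpy as np

def empirical_coverage(logq_true, logq_samples, alphas):
    """Empirical expected coverage of highest-posterior-density (HPD) regions.

    logq_true:    (N,) log-density of each approximate posterior at the true parameter.
    logq_samples: (N, L) log-densities at L samples drawn from each approximate posterior.
    For each credible level alpha, returns the fraction of the N cases whose true
    parameter falls inside the estimated alpha-HPD region. A calibrated posterior
    gives coverage(alpha) ~= alpha; a conservative one gives coverage(alpha) >= alpha.
    """
    # Estimated posterior mass with density at least as high as at the true parameter.
    mass_above = (logq_samples >= logq_true[:, None]).mean(axis=1)
    # The true parameter lies inside the alpha-HPD region iff that mass is <= alpha.
    return np.array([(mass_above <= a).mean() for a in alphas])
```

Evaluating this over N held-out simulations at a grid of levels yields the coverage curves used to judge whether an estimator is calibrated or merely conservative.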
Summary: The paper proposes a calibration term to be used directly in the training objective of NREs and NPEs. The paper shows that the introduction of this term achieves competitive or better results in terms of coverage and expected posterior density. Strengths: * The quality of the writing and presentation is high, with the structure easy to follow. In particular the related work is clearly included in the introduction. * The topic is of importance as it is increasingly more common to use neural estimators in SBI applications. * The experimentation is sufficient to show the effect of incorporating the new regulariser. It seems to show that the new regulariser leads to slightly conservative estimators that are on average better calibrated than the baselines. Weaknesses: * The paper does not appear to have any major weakness. A few minor weaknesses that seem to exist have been appropriately described in the paper. These are the computational cost of the approach and the fact that the paper’s main metric of performance is also the same one that has been used in the regulariser. If the authors could think of a different metric to use to evaluate performance then the paper would be further strengthened. However, it is appreciated that finding a new metric would potentially be a new paper in its own right. Perhaps including C2ST as a metric in the appendix might provide additional comparison. While the focus of the paper is on calibration, it is possible to get a perfectly calibrated, but badly performing estimator. * There is no mention of hyperparameter optimisation (although a sensitivity analysis is given). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Line 206: Does IS provide an improvement for NPE, even when it is not needed? * Since the regulariser relies on importance sampling did the authors do any experimentation on how the approach scales with dimension of $\theta$? 
This was mentioned in future work, but even in the experiments in the paper the theta varies across experiments. What are the dimensions of the experiments included in this paper? And did the authors see that the higher dimensional ones performed worse with the new regulariser? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: * The limitation related to computational cost is sufficiently highlighted in the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for taking the time to review our manuscript and for your comments. Below we would like to address the questions: - *Line 206: Does IS provide an improvement for NPE, even when it is not needed?* Intuition suggests that IS should introduce noise and worsen the results. In a limited study (please see Figure 3 of the global response PDF) we did a sanity check on two problems (Weinberg - 1D posterior; Lotka Volterra - 2D posterior) where IS is not used for NPE. The results show that IS with 16 samples (submitted manuscript) gives almost the same outcomes as directly sampling from the approximate posterior with the reparameterization trick (Figure 3 in the global response PDF). We expect the comparison to look very different for moderate- and high-dimensional posteriors. - *Since the regulariser relies on importance sampling did the authors do any experimentation on how the approach scales with dimension of $\theta$? This was mentioned in future work, but even in the experiments in the paper the theta varies across experiments. What are the dimensions of the experiments included in this paper? And did the authors see that the higher dimensional ones performed worse with the new regulariser?* We did experiments only with the problems mentioned in the submitted manuscript. The dimensions of the posteriors in the submitted manuscript are as follows: SLCP - 2D; M/G/1 - 3D; Weinberg - 1D; Lotka Volterra - 2D; Spatial SIR - 2D; Gravitational Waves - 2D. All of the studied problems are of the same order of magnitude in terms of the dimensionality of parameters (which is not the case for the dimensionality of observations, but this has limited impact on the regularizer); therefore, we cannot draw conclusions about scaling. We believe this should be verified in subsequent work. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks - confirming that I have read the rebuttal.
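For concreteness, the self-normalized importance-sampling estimate of the rank statistic discussed in this thread (the eq. (7) indicator combined with the eq. (9) weighting) can be sketched in NumPy. This is a toy 1D illustration, not the authors' implementation: the function name, the uniform stand-in for the prior proposal, and the Gaussian "approximate posterior" are all assumptions for the example, and the sign convention of eq. (7) may differ.

```python
import numpy as np

def rank_statistic_snis(log_post, theta_star, n, rng, lo=-5.0, hi=5.0):
    """Self-normalized IS estimate of the rank statistic
    alpha-hat = P[ p(theta | x*) >= p(theta* | x*) ],
    using a uniform proposal on [lo, hi] as a stand-in for the prior."""
    thetas = rng.uniform(lo, hi, size=n)
    logp = log_post(thetas)                  # approximate log posterior at proposal draws
    logw = logp - (-np.log(hi - lo))         # log importance weights vs. uniform proposal
    w = np.exp(logw - logw.max())
    w /= w.sum()                             # self-normalization (needed for NRE, harmless for NPE)
    inside = logp >= log_post(np.array([theta_star]))[0]
    return float(np.sum(w * inside))

# Toy check: standard-normal "posterior". At the mode the statistic is ~0;
# for theta* = 2 it is close to the posterior mass of the HPD region
# touching it, P(|theta| <= 2) ~ 0.95.
log_post = lambda t: -0.5 * t**2 - 0.5 * np.log(2 * np.pi)
rng = np.random.default_rng(0)
alpha_mode = rank_statistic_snis(log_post, 0.0, 4000, rng)
alpha_tail = rank_statistic_snis(log_post, 2.0, 4000, rng)
```

As the rebuttal notes, with a low-dimensional posterior this estimator behaves almost identically to direct sampling from the approximate posterior; the interesting (and untested) regime is higher-dimensional $\theta$, where the weights degenerate.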
Summary: The authors suggest a new objective function for simulation-based inference that adds a penalty term to the “expected score” objective that is used by many other works. This penalty term encourages the resulting posterior approximations to be well-calibrated in the sense that the $1-\alpha$ highest probability density regions contain the ground-truth parameter $100(1-\alpha)$% of the time. The penalty term is an extension of the Kolmogorov-Smirnov test statistic for goodness of fit between a $U(0,1)$ distribution and the rank statistics of eq. 7, which are asymptotically uniformly distributed when $p(\theta \mid x)$ is calibrated. Minimizing this penalty term alongside the usual objective function should yield posteriors that are calibrated while substantially different from the degenerate case of the prior (which is trivially calibrated). Strengths: * The proposed penalty/regularization term is relatively lightweight computationally, and can be tacked on to many existing simulation-based inference algorithms. * The intuition is simple; the construction of the statistics $\hat{\alpha}$ is clever, and the Kolmogorov-Smirnov test statistic is well-understood. * The method is assessed on a variety of problems from the test suite of benchmarks provided by Lueckmann et al. (2021), and performs favorably in that the resulting posteriors are either calibrated or tend to be conservative. Weaknesses: * Although synthetic likelihood and ABC approaches are mentioned in the introduction, the proposed method seems prohibitively costly in these scenarios, and seems geared toward amortized methods only, where eq. 7 can be computed rapidly. * For ratio estimation in particular, the non-regularized NRE is better calibrated than the corrected version in some cases (e.g., spatial SIR), suggesting that it is at least possible the penalty term can worsen calibration. 
* In practice, it appears that attempts at calibration often result in conservative rather than calibrated posteriors; while this is likely still preferable to many users to the alternative of *not* adding this correction, it suggests a bit of a misnomer. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * The method called neural posterior estimation (NPE) has been shown in other work to be equivalent to minimizing a forward KL divergence between exact and approximate posteriors (Reweighted Wake-Sleep (Bornschein and Bengio, 2015), and a similar work “Revisiting Reweighted Wake Sleep…” (Le et al., 2018)); it’s been argued in these that the resulting neural posterior estimates tend to be overdispersed or conservative as a result. 1) How does this impact the relevance of this work if the baseline is usually already conservative? 2) The NPE row of Figure 1 doesn’t seem to reflect this intuition in practice; is there any explanation for why this is so? Is NPE indeed the same as sleep-phase in reweighted wake-sleep in the experiments? * What are the advantages and disadvantages of using either the sorting-based computation or the direct computation method? Is the use of the latter solely due to the belief that backprop through indicator functions is somehow worse? As $\hat{\alpha}$ itself requires indicator functions, I suppose the backprop operation can’t be too problematic. It would be interesting to see both methods implemented in the experiments. * The form of eq. 9 suggests self-normalized importance sampling, but as $\hat{p}(\theta \mid x^*)$ is already normalized, this doesn't seem necessary. Maybe this is a misunderstanding of notation. To my reading, though, just the numerator of eq. 9 is a valid importance sampling estimator. Is so-called “standard” or self-normalized importance sampling nonetheless used? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Generally the settings where this work may be applied are clear, but as mentioned above, discussing the use of the method in amortized vs. non-amortized settings would make clearer the costs associated with this method when $\hat{p}(\theta_j \mid x^*)$ can’t be computed quickly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
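The penalty this review describes (the Kolmogorov-Smirnov distance between the rank statistics and U(0,1)) has a simple closed form once the statistics are sorted. A minimal NumPy sketch of that core quantity, ignoring the paper's differentiable relaxations (soft sorting, relaxed indicators):

```python
import numpy as np

def ks_to_uniform(alpha_hat):
    """One-sample Kolmogorov-Smirnov statistic between the empirical CDF of
    the rank statistics and the CDF of U(0,1) (the identity on [0,1])."""
    a = np.sort(np.asarray(alpha_hat, dtype=float))
    n = len(a)
    i = np.arange(1, n + 1)
    # sup_t |F_n(t) - t| is attained just before or after an order statistic
    return float(np.maximum(i / n - a, a - (i - 1) / n).max())

# Near-uniform rank statistics give a small penalty; an over-confident
# model, whose rank statistics pile up near one end, is penalized heavily.
near_uniform = (np.arange(1, 101) - 0.5) / 100
piled_up = np.full(100, 0.999)
```

Minimizing this quantity alongside the usual training objective is the idea of the regularizer; in the paper the hard sort is replaced by a differentiable one so that gradients can flow.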
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for taking the time to review our manuscript and for your comments. Below we would like to address the questions: - *Although synthetic likelihood and ABC approaches are mentioned in the introduction, the proposed method seems prohibitively costly in these scenarios, and seems geared toward amortized methods only, where eq. 7 can be computed rapidly.* We would like to clarify that the proposed method is intended only for amortized methods. Synthetic likelihood and ABC are mentioned only to give a more complete context of SBI; we will make this clearer in the text. - *For ratio estimation in particular, the non-regularized NRE is better calibrated than the corrected version in some cases (e.g., spatial SIR), suggesting that it is at least possible the penalty term can worsen calibration.* The non-regularized NRE for Spatial SIR is not conservative for the first three simulation budgets, while CalNRE is conservative for all of the analyzed budgets, which is the improvement that we intend to introduce with the proposed method. Indeed, the distance of the coverage curve from the diagonal may increase when using the proposed regularization term, but this is not a violation of the objective in eq. (6), re-formulated in L127 for conservativeness. - *In practice, it appears that attempts at calibration often result in conservative rather than calibrated posteriors; while this is likely still preferable to many users to the alternative of not adding this correction, it suggests a bit of a misnomer.* We would like to clarify that the results presented in the main part of the submitted manuscript (Figures 1 & 2 + Figure 5 in the Appendix) are obtained with the objective of minimizing conservativeness error (L130), not calibration error. Results for the latter are shown in the Appendix (Figures 6-8), where we find it difficult to draw any firm conclusions.
Finding a calibrated approximate posterior is a very difficult task and we make no claim that the proposed method is able to systematically provide it. We will emphasize this in the revised version of the manuscript. - *The method called neural posterior estimation (NPE) has been shown in other work to be equivalent to minimizing a forward KL divergence between exact and approximate posteriors (Reweighted Wake-Sleep (Bornschein and Bengio, 2015), and a similar work “Revisiting Reweighted Wake Sleep…” (Le et al., 2018)); it’s been argued in these that the resulting neural posterior estimates tend to be overdispersed or conservative as a result. 1) How does this impact the relevance of this work if the baseline is usually already conservative? 2) The NPE row of Figure 1 doesn’t seem to reflect this intuition in practice; is there any explanation for why this is so? Is NPE indeed the same as sleep-phase in reweighted wake-sleep in the experiments?* The training objective of NPE is indeed the same as the one used in the sleep phase of RWS, but with instances $(\theta, x)$ coming from a fixed training dataset (measured in the field or sampled from a simulator) in contrast to “dream samples” from the trained generative network in RWS. So the problem solved by these two methods is fundamentally different. The NPE row of Figure 1 is reprinted from Hermans et al., 2022, where in an extensive empirical study the authors identify multiple methods (including NPE) to be over-confident (non-conservative) for a number of benchmark problems in SBI – in a limited study we confirmed their results (not reported in the submitted manuscript). In our work, we use the same benchmark problems, and with the proposed regularizer we are able to train models that are conservative in cases where the non-regularized models were not. We do not claim that the use of our method is necessary, but rather we propose it as a solution when over-confidence is identified.
In addition, we would like to note that Hermans et al. identify two standard solutions in machine learning as often effective in the analyzed problems, i.e., increasing the amount of training data and ensembling. Therefore, it may simply be that in the cited RWS works the simulation budget during training was large enough that the problem of over-confidence did not arise. Meanwhile, in SBI, obtaining a sufficient dataset can be very expensive, or even impossible. - _The form of eq. 9 suggests self-normalized importance sampling, but as $\hat{p}(\theta | x^*)$ is already normalized, this doesn't seem necessary. Maybe this is a misunderstanding of notation. To my reading, though, just the numerator of eq. 9 is a valid importance sampling estimator. Is so-called “standard” or self-normalized importance sampling nonetheless used?_ Indeed, self-normalized importance sampling is given in eq. (9) and is also used in our experiments. For NPE the denominator is completely unnecessary because the approximate posterior is normalized by construction. However, in the case of NRE, there is no such guarantee (we only hope that this is achieved at convergence), which exposes a slight abuse of notation on our side. Therefore, self-normalized IS is advisable for NRE, and to have a single implementation we use it also for NPE. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thanks to the authors for the detailed point-by-point response. My main question about the instances of over-confidence (i.e. lack of conservativeness) for NPE has been answered by the use of a fixed dataset for training. As this has been reproduced from and discussed extensively in Hermans et al., I’m satisfied with these. The authors have indicated that they will make the distinction between amortized and non-amortized approaches clearer as well, which is appreciated. My primary remaining critique is on the use of “calibration” as the selling point of the paper.
While I agree that conservative posteriors are beneficial, they can also be obtained trivially (e.g., take the prior), and I’m not sure I agree with the authors that “... we make no claim that the proposed method is able to systematically provide [calibrated posteriors]” due to the title. The authors have indicated that they will address this in the exposition and I hope this is done. A new concern I have noticed is that in the experiments, a higher computational budget seems to result in more conservative posteriors sometimes. In particular, I’m looking at the M/G/1 column of Figure 1 in the main body. In the “Cal” versions, the yellow line corresponding to the highest computational budget is either the most or second-most conservative on average. I would appreciate the authors’ comments on this point. This compares unfavorably with, say, BNRE, which seems to maintain conservativeness while becoming more and more calibrated as the computational budget increases (e.g., in the SIR plot for BNRE). --- Reply to Comment 1.1.1: Comment: - "...higher computational budget seems to result in more conservative posteriors sometimes..." To address the question we will use Figure 5, located in Section B of the Appendix. There, one finds that BNRE indeed tends to yield AUC closer to zero as the simulation budget increases, while our proposed regularizer (CalNRE and CalNPE rows) does not exhibit such behavior. We consider this to be consistent with the properties of the proposed solution when the conservativeness error is minimized (results presented in Figs. 1, 2, and 5), because once the model yields conservative posteriors there is no penalty from the regularizer, and thus the training objective does not distinguish between "more conservative" and "less conservative" models. It turns out that, in such a regime, the main loss term favors less-confident models when exposed to more training data.
This may seem counter-intuitive at first (maximizing predictive performance -> minimizing under-confidence) but is the desired phenomenon when the inverse problem is ambiguous (ground-truth posterior is not a Dirac delta), because a higher simulation budget means exposure to more diverse samples that should result in less-confident models. In fact, this is exactly what we observe for the non-regularized models, therefore we conclude this is a behavior of the NRE/NPE approach itself. If bringing the AUC as close to zero as possible is desired we hypothesize that adding a low-weighted calibration error term in the late phase of training (or adding it progressively) could bring some improvement. However, we did not conduct such experiments and now see it as a potential future work direction.
Summary: The paper introduces a training algorithm for posterior distribution learning in the likelihood-free setting that combats overconfident models. The authors focus on the expected coverage probability (ECP) from Hermans et al., 2022 to measure whether a model posterior is conservative. Specifically, when its value is equal to the credibility level used in its calculation, the posterior is calibrated. An equivalent condition is that the distribution of credibility levels constructed from samples (Lemma 1) is uniform. With this in mind, the authors propose a regularizer that penalizes how far the distribution of these credibility levels is from a uniform distribution. The main contribution of the paper is the algorithm used to train with this regularizer. The authors use a variety of techniques in their algorithm, including a one-sample Kolmogorov-Smirnov test to work with finite samples, differentiable sorting to implement the test, and importance sampling to choose useful samples to use. The experimental results demonstrate that the learned models are more conservative than their unregularized counterparts while not sacrificing performance in terms of likelihood. Furthermore, the authors showed the effect of the hyperparameters on performance and also showed that the training algorithm can be expensive if not run on GPUs. Strengths: - Directly addresses the "trust crisis in simulation-based inference" described in Hermans et al., 2022. - Strong empirical results to demonstrate that the training algorithm works. - The paper progressed smoothly from the background to the proposed algorithm. - The layout of the text and plots are visually easy to digest. Weaknesses: - It wasn't too clear to me after reading the main text why having uniformly distributed credibility levels was the right thing to want. - Similarly, it took a bit of drawing to see how all of the values in section 2 were related to each other.
Adding something like figure 9 to the main text could greatly help introduce the background material. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why choose the one-sample Kolmogorov-Smirnov test to measure the divergence over others? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors acknowledge that their method is computationally expensive and may not scale to high dimensions as is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
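The review's question about why uniformly distributed credibility levels are the right target (Lemma 1) can be checked by simulation on a conjugate Gaussian toy model. The model and all names below are assumptions for illustration only, not from the paper:

```python
import numpy as np
from math import erf, sqrt

# Toy conjugate model: theta ~ N(0, 1), x | theta ~ N(theta, 1),
# so the exact (calibrated) posterior is N(x/2, 1/2).
rng = np.random.default_rng(0)
n = 20000
theta_star = rng.normal(size=n)          # ground-truth parameters
x = theta_star + rng.normal(size=n)      # simulated observations
post_mean, post_sd = x / 2.0, sqrt(0.5)

# Credibility level of the smallest HPD region containing theta*: for a
# Gaussian posterior this is P(|Z| <= z) with z the standardized distance.
z = np.abs(theta_star - post_mean) / post_sd
alpha_hat = np.array([erf(v / sqrt(2.0)) for v in z])
# Under the exact posterior, alpha_hat is distributed as U(0, 1); an
# over-confident approximation would instead pile these levels up near 1.
```

This is exactly the uniformity the regularizer enforces: any systematic deviation of these levels from U(0,1) signals mis-calibration.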
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for taking the time to review our manuscript and for your comments. Below we would like to address the questions: - *Why choose the one-sample Kolmogorov-Smirnov test to measure the divergence over others?* The one-sample Kolmogorov-Smirnov test was our first choice. Its test statistic is straightforward to use as a minimization objective, and a differentiable relaxation was also easy to find. We did not investigate alternative approaches. The Anderson–Darling and Cramér–von Mises tests are typically listed as common alternatives to the KS test. Both of them require ordered samples, thus only the sorting-based (alternatively, ranking-based) computation is available. Moreover, both tests rely on tabulated critical values, which necessitates the introduction of a significance level and is less convenient to use as a minimization objective. --- Rebuttal Comment 1.1: Comment: Thanks, that makes sense. I don't have any other questions; the contributions are relevant and clearly presented in the paper.
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you very much for taking the time to review our manuscript and for your comments. Below we would like to address a question that has come up in several reviews: *What are the advantages and disadvantages of using either the sorting-based computation or the direct computation method?* First, we would like to underline that we made no claims of the superiority of the sorting-based computation (the one used in the experiments in the submitted manuscript) over the direct computation. Sorting-based computation happened to be the first one we evaluated empirically, and since the method seemed to perform well we did not investigate the alternative approach in light of limited computational resources. Our recent empirical results - Figure 1 and Figure 2 in the global response PDF - show that using direct computation instead of sorting-based computation, while keeping all the remaining hyper-parameters untouched, leads to performance degradation both in terms of coverage and log-posterior density. $D_N$ is evaluated over 128 (same as the batch size) randomly sampled (in every iteration) levels on the (0,1) interval. We also observed optimization stability issues that reveal themselves in high variance, over random initializations, of the expected value of the approximate log posterior density of the nominal parameters for SpatialSIR in Figure 2 of the global response PDF. We hypothesize that the poor performance of direct computation is due to the double use of a straight-through estimator through the indicator function - first in eq. (7) and then in eq. (8) - with the same backward relaxation. We see this issue as a direction for further investigation. Moreover, differentiable sorting can be seen as a straight-through estimator of a piecewise linear function, suggesting an equivalence between sorting-based computation and direct computation but with different backward relaxations. Pdf: /pdf/54b8567ff7a3c337b45664254b7d9c08d1f77bfb.pdf
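The two computations discussed in this response target the same statistic $D_N$ and differ only in their backward relaxations (soft sorting vs. straight-through indicators). In plain NumPy, hypothetical non-differentiable versions of both agree up to grid resolution, which is what makes the comparison of relaxations meaningful:

```python
import numpy as np

def dn_sorting(alpha_hat):
    """Sorting-based D_N: closed form via order statistics."""
    a = np.sort(alpha_hat)
    n = len(a)
    i = np.arange(1, n + 1)
    return np.maximum(i / n - a, a - (i - 1) / n).max()

def dn_direct(alpha_hat, levels):
    """Direct D_N: empirical CDF built from indicator functions, evaluated
    on a grid of levels (the rebuttal uses 128 random levels per iteration)."""
    ecdf = (alpha_hat[None, :] <= levels[:, None]).mean(axis=1)
    return np.abs(ecdf - levels).max()

rng = np.random.default_rng(0)
a = rng.uniform(size=50)
levels = np.linspace(0.0, 1.0, 20001)
# A dense enough grid recovers the sorting-based value from below.
```

In the paper's training setting, each `np.sort` and `<=` would be replaced by its differentiable counterpart; the hypothesized issue above concerns only those backward passes, not the forward values sketched here.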
NeurIPS_2023_submissions_huggingface
2023
An Efficient and Robust Framework for Approximate Nearest Neighbor Search with Attribute Constraint
Accept (poster)
Summary: The paper introduces a novel approximate nearest neighbor search framework that bakes attribute constraints into a single composite index, in contrast to many existing two-stage solutions. The framework relies mainly on a newly proposed distance function that fuses feature vector distances and attribute vector distances. Based on the new distance function, several proximity-graph-based ANNS methods have been developed. The paper presents an ablation study showing the merits of the methods. The experimental results shown on multiple popular landmark datasets outperform baseline methods. Strengths: The proposed fusion distance is an excellent idea that enables single composite index construction as well as advantages over legacy two-stage models. Each step of the method has been well explained. In addition, most of the components are accompanied by theoretical proofs or supporting analysis. ANN methods are often complicated to compare given the trade-offs among different factors, including memory, accuracy, and speed. The paper has conducted sufficient experiments, ablation studies, and explanations of results. These strengthen the hypothesis and make the work solid overall. Weaknesses: Details about the configuration or setup of some baselines are missing. For example, the parameters or usage of FAISS are not discussed. This makes it harder to understand why FAISS gets saturated at recall 80% as shown in Section 5.2. Is it possible that the number of probes in IVF was not increased sufficiently? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: There will be other challenges as the number of attributes increases. It may be worth discussing solutions if there is very high diversity of attributes among the data to be indexed. For example, different categories in product search may not share the same attributes (names).
The total number of attributes could be considerably large, which would also increase the attribute vector distance computation cost. Two-stage approaches, e.g., AF + ANNS, are less likely to be constrained. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No limitations of the work are discussed in the paper. Please refer to my questions and comments in Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
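The fused distance at the heart of the framework can be illustrated with a toy sketch. The exact formulation and weighting below are assumptions for illustration, not the paper's definition:

```python
import numpy as np

def fused_distance(x_feat, y_feat, x_attr, y_attr, w=1.0):
    """Toy fusion of a feature-vector distance and an attribute-vector
    distance into one comparable quantity, so that a single composite index
    can rank candidates by both feature proximity and attribute agreement.
    `w` (assumed) trades the two terms off; the attribute term is a
    normalized Hamming distance over categorical labels."""
    feat_d = float(np.linalg.norm(np.asarray(x_feat) - np.asarray(y_feat)))
    attr_d = sum(a != b for a, b in zip(x_attr, y_attr)) / len(x_attr)
    return feat_d + w * attr_d

# With identical features, a candidate matching the query's attributes
# ranks strictly closer than one that mismatches on an attribute.
d_match = fused_distance([0.0, 0.0], [1.0, 0.0], ("red", "cotton"), ("red", "cotton"))
d_miss = fused_distance([0.0, 0.0], [1.0, 0.0], ("red", "cotton"), ("blue", "cotton"))
```

Note the reviewer's concern above maps directly onto `attr_d`: with very many attributes its per-comparison cost grows, though it typically remains cheap relative to the feature-distance term.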
Rebuttal 1: Rebuttal: **W1: Experimental Details and Faiss Recall Issues** Thank you for your suggestions. Due to space limitations, we included the environment configuration and setups for the baselines in the appendix. We apologize for the omission of important details in the main paper. We'll improve the organization of the experiments section and include essential information such as experimental setup, datasets, and baseline descriptions in the main paper. Additionally, we'll also provide detailed elaborations in the appendix. In FAISS, we utilize the popular IVFPQ algorithm. In the indexing phase, IVFPQ first divides the database vectors into coarse clusters using k-means and then applies product quantization (PQ) to compress the vectors into compact codes. PQ divides each vector into $M$ subvectors and assigns each subvector to one of $K$ centroids in a codebook. The codebook is learned by performing *k*-means on the subvectors. The PQ code of a vector is the concatenation of the cluster IDs of its subvectors. This representation allows each vector to occupy $M$ bytes, where $M$ is typically 8 or 16 or 32. In the searching phase, IVFPQ first identifies the nearest coarse clusters to the query vector. It then scans the inverted lists of these clusters to find the nearest PQ codes to the query vector using asymmetric distance computation (ADC). ADC calculates the distance between the query vector and a PQ code by summing up the distances between each subvector of the query and the corresponding centroid of the PQ code. This approach avoids the need to decompress the PQ codes and reduces the memory access cost. Some key parameters of IVFPQ include: - The number of coarse clusters, which determines the number of inverted lists created and the number of clusters searched in each query. A larger number of clusters results in finer-grained partitioning but also increases computation and memory overhead. 
- The number of subvectors $M$ and the number of centroids $K$ in PQ, which determine the level of compression achieved and the amount of quantization error introduced. A larger $M$ or $K$ leads to higher accuracy but also increases storage and computation costs. - The number of nearest clusters to search for each query, which determines the number of inverted lists scanned and the number of PQ codes compared. A larger number yields higher recall but also increases computation cost. We determine the optimal parameters of IVFPQ for different datasets using the automatic parameter adjustment tool provided with FAISS. Due to compression errors, the recall rate of IVFPQ saturates at a certain value (e.g., ~80%), as widely observed in the literature [7] (refer to Fig. 2(a) of [7]), [8] (refer to Fig. 1 of [8]) and empirically verified in our experiments (Fig. 6(a)-(c) of the main paper). We would like to highlight that increasing the number of probes in IVF does not eliminate this limitation. Even when scanning all coarse clusters, the recall rate remains low. This limitation is a consequence of approximate distance computation. Thank you again for bringing up these points, and I hope this explanation clarifies the behavior of FAISS and its limitations in achieving higher recall rates. **Q1: High Diversities of Attributes** Thank you for your comments. It is indeed true that challenges arise as the number of attributes increases. To test the impact of varying diversities of attributes, we conduct QPS-Recall comparisons across different attribute dimensions {3, 6, 9}, as shown in Fig. 3 of the attached PDF file. Notably, the corresponding number of attribute combinations (represented by $z$) is {36, 972, 26244}. We find that increasing the number of attributes can pose challenges for both the NHQ and two-stage (e.g., ANNS+AF) frameworks. Here are some observations from our experiments: - Decreased QPS with increased number of attributes. 
The QPS decreases for all methods, including both NHQ and two-stage frameworks, as the number of attributes increases. For instance, when the number of attributes is 36, 972, and 26244, the QPS of the Vearch method is 1852, 725, and 106, respectively, for the same Recall@10 of 0.95. This decline in QPS highlights the fact that handling a larger number of attributes necessitates additional computational resources. - Higher speedup of NHQ over two-stage framework. NHQ demonstrates a higher speedup over the two-stage framework when dealing with a large number of attributes, despite the increased attribute vector distance computation cost in NHQ. For example, when the number of attributes is 36, 972, and 26244, with a Recall@10 of 0.95, NHQ achieves speedups of 9.0x, 17.2x, and 16.5x over Vearch, respectively. This indicates that NHQ is more efficient in scenarios with a large number of attributes compared to the two-stage framework. It's important to note that although NHQ may experience increased attribute vector distance computation costs, the two-stage framework is also affected by higher feature vector distance computation costs required to generate more intermediate results for attribute filtering. However, the feature vector distance computation cost is typically significantly higher than the attribute vector distance computation cost. As a result, the two-stage methods are more likely to be constrained by the number of attributes compared to NHQ. In summary, increasing the number of attributes introduces challenges for both NHQ and two-stage frameworks, leading to decreased QPS. However, NHQ demonstrates higher speedup over two-stage framework in scenarios with a large number of attributes, indicating its potential advantage in handling diverse attribute sets efficiently. --- Rebuttal Comment 1.1: Comment: Thanks for the authors detailed answers. 
The ~80% recall plateau of IVFPQ is most likely a consequence of the number of subspaces and the codebook size configuration. Although it is not a major concern to me, I don't think it is a fair comparison and cannot agree that these are already the optimal parameters for IVFPQ. In particular, the memory cost of the proposed method is much higher than IVFPQ's. --- Reply to Comment 1.1.1: Title: Response to Reviewer 7xuq Comment: Thank you for your comments. We address each of your concerns in the following section. **IVFPQ Recall Plateau Issue** We agree that the ~80% recall plateau of IVFPQ is due to the configuration of the number of subspaces and the codebook size. The trade-off in IVFPQ involves search accuracy, efficiency, and space cost. A larger number of subspaces and codebook size can improve search accuracy but decrease search efficiency. For instance, when the number of subspaces and the codebook size are the largest (referred to as IVFFlat in Faiss), we can achieve a recall rate of 100% by probing all clusters in IVF. However, this configuration may be slower than brute-force search (refer to Fig. 6 in VLDB'19). Such low efficiency is impractical, so most research mainly focuses on the trade-off between search efficiency and accuracy in IVFPQ (VLDB'19, TPAMI'21, etc.). To achieve high efficiency, we set an appropriate number of subspaces and codebook size (not necessarily the largest possible) during the offline phase. We note that both the number of subspaces and the codebook size are offline parameters. In the online search phase, we can only adjust the number of nearest clusters to probe for each query to achieve higher recall. Therefore, the recall rate of IVFPQ saturates at a certain value (which may vary across datasets) due to compression errors in the offline phase. We would like to clarify that we determined the optimal parameters for IVFPQ on different datasets using the automatic parameter adjustment tool provided by Faiss.
Our evaluation indicates that the output parameters closely align with the optimal parameters obtained through grid search in most cases. Similar parameter settings have also been used in other literature (NeurIPS’20, NeurIPS’22, etc.) to evaluate IVFPQ. **Fair Comparison Issue** We acknowledge that the parameters for IVFPQ in our evaluation may not be optimal for achieving the highest recall or speedup. However, our paper focuses on the trade-off between accuracy and efficiency, not just accuracy or efficiency alone. Therefore, we configure the parameters to achieve the best trade-off for all methods, following the setting of related works (VLDB’19, NeurIPS’22, SIGMOD’23, etc.). We agree that NHQ, our proposed method, has a higher memory cost than IVFPQ when considering their best trade-offs. This is a common drawback of graph-based methods compared to PQ-based methods in the ANNS community (VLDB’19, NeurIPS’19). This is mainly because graph-based methods (such as HNSW) build an extra proximity graph index (stored as an adjacency list) to speed up the online search process (please see our analysis in the response to reviewer MN6o). However, graph-based methods are an order of magnitude more efficient than PQ-based methods in terms of queries per second (QPS) for a given recall, and this efficiency gap increases with data size (WWW’23). Therefore, while graph-based methods have a higher storage cost, they achieve a significantly better trade-off between accuracy and efficiency and have become the mainstream algorithms in most vector databases (such as Milvus, Pinecone, AnalyticDB-V). Our NHQ framework enhances the ability of current graph-based methods to handle ANNS + AF. Thank you again for your constructive comments. We will add a discussion on the limitation of storage cost for NHQ. We believe there is still room for improvement in balancing storage and search performance (including accuracy and efficiency), and we plan to explore this in our future work.
**Reference:** VLDB'19: Fu, et al. Fast approximate nearest neighbor search with the navigating spreading-out graph. TPAMI'21: Fu, et al. High dimensional similarity search with satellite system graph: efficiency, scalability, and unindexed query compatibility. NeurIPS’20: Ren, et al. HM-ANN : Efficient billion-point nearest neighbor search on heterogeneous memory. NeurIPS'22: Chern, et al. Tpu-knn: K nearest neighbor search at peak flop/s. SIGMOD'23: Gao, et al. High-dimensional approximate nearest neighbor search: with reliable and efficient distance comparison operations. NeurIPS'19: Jayaram Subramanya, et al. Diskann: Fast accurate billion-point nearest neighbor search on a single node. WWW'23: Gollapudi, et al. Filtered-DiskANN: Graph algorithms for approximate nearest neighbor search with filters.
Summary: In this paper, the authors tackle the problem of retrieving nearest neighbor items under constraints on attributes of the retrieved items. Each item is described by a feature vector and a set of discrete attributes. The authors propose to use a distance function that is a weighted combination of a feature-vector-based distance and a discrete-attribute-based distance. To support efficient retrieval for a given query and attribute constraints, the authors use graph-based nearest neighbor search indices. The proposed hybrid distance function is used to build the graph in an offline step and to navigate the graph during test-time search. The authors also propose two heuristics to improve graph construction and test-time graph navigation, and overall the proposed approach yields improvement over baselines. Update: I am leaning towards accepting the paper and have updated my rating from `5: borderline accept` to `7: accept` after reading clarifications from the authors. Strengths: - The proposed idea of using a hybrid distance function (which combines feature-vector and attribute-based distances) provides significant improvement over two-stage inference pipelines that either retrieve based on feature vectors and then filter based on attributes, or first filter based on attributes and then search over the filtered items using feature vectors. - The proposed approach outperforms popular baselines on a variety of datasets. Weaknesses: - Some missing baselines/ablations - The three main contributions of the paper are - a) New distance function for hybrid queries. - b) New algorithm for constructing a graph over items. - c) New heuristic for efficiently navigating the graph at test-time. - The experiments clearly show the advantage of using the hybrid distance function over two-stage search (which separately performs nearest neighbor retrieval based on feature vectors and then filters based on attributes).
- But the individual contribution of the proposed graph construction strategy and the proposed graph navigation strategy is not clear. - The proposed graph construction algorithm should be compared with NGT, HNSW, Munoz et al. (2019) while keeping every other design variable the same, i.e., with the same test-time inference as well as the same distance function. - Similarly, the proposed graph search method should be compared with existing methods for speeding up search such as TOGG (mentioned in the paper), Chen et al. (2023), Munoz et al. (2019). - The presentation of the paper can be further improved. - Most algorithms are described in text, but it would help to present them in Algorithm boxes. - While it is okay to use the appendix for extra results, theorem proofs, etc., I think some important details such as the experiment setup, datasets, and baseline descriptions have also been moved to the appendix. Reading the paper involved too many jumps between the main paper and the appendix. Authors could make the hybrid distance section more concise to make some space, or move some extra results to the appendix while keeping only main results in the paper. - Proof for Theorem 4 is missing *Patrick Chen, Wei-Cheng Chang, Jyun-Yu Jiang, Hsiang-Fu Yu, Inderjit Dhillon, and Cho-Jui Hsieh. 2023. FINGER: Fast Inference for Graph-based Approximate Nearest Neighbor Search. In Proceedings of the ACM Web Conference 2023 (WWW '23)* Munoz, Javier Vargas, et al. "Hierarchical clustering-based graphs for large scale approximate nearest neighbor search." *Pattern Recognition* 96 (2019): 106970. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Performance of NPG_{kgraph} on the SIFT1M dataset is reported in Fig 5c, 6c and 7a. But the three graphs do not seem to match. - In Fig 5c), does NHQ-NPG_{kgraph} use the proposed routing method, and do the other baselines (NHQ-HNSW and NHQ-NSG) use naive greedy search? - What hyper-parameters were tried for the ANNS + AF filtering approach?
- The proposed search strategy has two stages S1 and S2. How does the transition from S1 to S2 happen? - Complexity in Sec 4.1: If C(u_i) is selected through an additional index, then it will not be a constant-time operation. Also, in the final results, is C(u_i) selected randomly or using some additional index (and what index)? - Some suggestions on writing and presentation - Including gridlines in the plots can help with reading the graphs. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No discussion Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
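The hybrid distance function summarized in this review can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the weight `w`, the squared-Euclidean feature distance, and the mismatch-count attribute distance are all assumptions.

```python
def fused_distance(x_feat, q_feat, x_attr, q_attr, w=0.5):
    """Hedged sketch of a fused distance: a weighted sum of a geometric
    feature-vector distance and a discrete attribute mismatch count.
    The weight w and both component distances are assumptions."""
    feat_dist = sum((a - b) ** 2 for a, b in zip(x_feat, q_feat))
    attr_dist = sum(1 for a, b in zip(x_attr, q_attr) if a != b)
    return w * feat_dist + (1.0 - w) * attr_dist

# A single fused score ranks candidates, so one graph index built and
# searched with it handles the vector search and the filter jointly.
assert fused_distance([0.0, 0.0], [1.0, 0.0], [1, 3], [1, 2]) == 1.0
```

Because one scalar orders all candidates, the same comparison drives both offline graph construction and online navigation, which is the core of the reviewed design.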
Rebuttal 1: Rebuttal: **W1: Missing Baselines/Ablations** We appreciate your suggestions and include the missing ablations to validate our edge selection and routing strategies. * Graph construction strategy We compare our edge selection strategy with four existing ones: NGT [9], HNSW [6], HCNNG [10], and NSG [11] on SIFT1M and GIST1M datasets using a recent evaluation framework [5]. All competitors use the same test-time inference setup and distance function. We measure the Speedup-Recall metric, where Speedup is relative to brute force. Fig. 2 (a) and (b) in the attached PDF show that our strategy outperforms the others. For example, at Recall@10=0.9 on SIFT1M, our strategy achieves 1.1x, 1.4x, and 25.9x speedup over HCNNG/NSG, HNSW, and NGT, respectively. We also compare the index construction time and size of our strategy with the others in Tab. 3 of the PDF. Our strategy builds faster and yields a smaller index than the others, demonstrating its superior efficiency and lower cost. * Graph navigation strategy We compare our graph navigation strategy with three existing ones: TOGG [12], FINGER [13], and HCNNG [10] within the HNSW index framework with consistent parameters. Tab. 4 in the PDF shows the speedup of each optimized strategy over the original HNSW at Recall@10 = 0.9 on SIFT1M. All optimized strategies are faster than the original HNSW, indicating the benefit of optimizing graph navigation. Our strategy has the lowest storage cost among the optimized strategies, as it does not require extra structures unlike the others. For instance, FINGER's storage cost is 3x higher than the original HNSW. This confirms the efficacy of our strategy, which improves search performance and minimizes storage requirements. **W2: Presentation Issue** Thank you for your suggestions. We'll move the pseudocodes of the composite index and joint search from the appendix to the main text and reorganize Section 3 for better clarity.
Additionally, we'll include the pseudocodes of our edge selection and routing in the final version. We appreciate you pointing out the issue of poor readability due to important details being placed in the appendix. To address this, we'll move some crucial details (such as the experiment setup, datasets, and baseline description) into the main paper. Furthermore, we'll improve the hybrid distance section to make it more concise, allowing us to add these important details. Additionally, we'll transfer extra results from our experiments to the appendix, while retaining only the main results in the paper. We have double-checked our appendix, and the proof for Theorem 4 can be found in Appendix L (bottom of page 18 in our appendix). **Q1** In Fig. 5(c) and 6(c), we present the performance of the hybrid query (HQ) on the SIFT1M dataset. The difference in the scales of the axes makes the performance curves of NHQ-$NPG_{kgraph}$ appear mismatched, but they are actually identical in both figures. In Fig. 7(a), we report the performance of ANNS on SIFT1M. Therefore, it is natural that the QPS-Recall curve for $NPG_{kgraph}$ in Fig. 7(a) does not match the QPS-Recall curves for NHQ-$NPG_{kgraph}$ in Fig. 5(c) and 6(c). **Q2** Yes, in Fig. 5(c), NHQ-$NPG_{kgraph}$ utilizes the proposed routing method, while the other baselines (NHQ-HNSW and NHQ-NSG) employ naive greedy search. We'll clarify this point in the revision. **Q3** In the ANNS+AF filtering approaches, there are two types of hyperparameters involved. The first type pertains to the parameters specific to the ANNS methods used. We determine the optimal parameter configuration for these methods by referring to their respective papers or repositories (if provided) or by performing a grid search (if not provided). For example, parameters such as the maximum number of neighbors ($M$) and the size of the candidate set when selecting neighbors ($ef_{construction}$) are considered for HNSW.
The second type of hyperparameter relates to the intermediate result size ($C$), where the intermediate results are obtained from the ANNS methods. We apply the AF on these intermediate results to produce the final hybrid query results. Thus, $C$ impacts both search efficiency and accuracy. Since the optimal $C$ value may vary depending on the dataset and queries, it's challenging to predict it in advance. Therefore, we conduct a grid search to determine the appropriate $C$ before conducting the query tests. It's important to note that we generate different QPS-Recall pairs by adjusting the search candidate size of the ANNS methods. For example, we modify the parameter $ef_{search}$ in HNSW to yield different QPS-Recall trade-offs. We'll include the hyper-parameter discussion of all methods in our appendix. **Q4** The transition from stage S1 to S2 in the proposed routing strategy occurs when stage S1 reaches a local optimum, indicated by the candidate set $R$ not being updated. In stage S2, we proceed to update the candidate set $R$ by checking all neighbors of the visited vertex. The search process terminates when the candidate set $R$ no longer receives any further updates. We'll add a pseudocode of our routing strategy to clarify it in the revision. **Q5** It's correct that if $C(u_i)$ is selected through an additional index, it will not be a constant time operation. We highlight that the complexity analysis in Section 4.1 of our paper is on the basis of a given $C(u_i)$. The total complexity analysis should consider the time complexity of obtaining $C(u_i)$. In the final version, we'll make sure to clarify this point. As stated in Section 4.3 of our paper, our edge selection process relies on two base proximity graphs: NSW and KGraph. We obtain $C(u_i)$ using these two graph indexes (please refer to Appendix M for more details). The time complexity of obtaining $C(u_i)$ is $O(|V| \cdot \log^2(|V|))$ and $O(|V|^{1.14})$ for NSW and KGraph [5], respectively.
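The S1-to-S2 routing described in Q4 can be sketched as follows. This is a hedged reconstruction from the text above, not the authors' implementation: the adjacency-dict graph representation, the size-$k$ candidate set, and the exact update rules are assumptions.

```python
def two_stage_search(graph, dist, entry, k):
    """Hedged sketch of the two-stage routing described in Q4.
    `graph` maps a vertex id to its neighbor list; `dist` scores a
    vertex against the query. Both representations are assumptions."""
    visited = {entry}
    R = [(dist(entry), entry)]  # candidate set, kept sorted, size <= k

    def update(vertices):
        # Try to add each unvisited vertex to R; report whether R changed.
        changed = False
        for v in vertices:
            if v in visited:
                continue
            visited.add(v)
            d = dist(v)
            if len(R) < k or d < R[-1][0]:
                R.append((d, v))
                R.sort()
                del R[k:]
                changed = True
        return changed

    # Stage S1: greedy descent -- expand the current best vertex until R
    # stops improving, i.e. the search reaches a local optimum.
    while update(graph[R[0][1]]):
        pass

    # Stage S2: check all neighbors of every visited vertex, repeating
    # until the candidate set R receives no further updates.
    changed = True
    while changed:
        changed = update([v for u in list(visited) for v in graph[u]])

    return [v for _, v in R]

# Toy graph: vertex 4 is the true nearest neighbor, reachable via 3.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
d = {0: 5.0, 1: 3.0, 2: 4.0, 3: 1.0, 4: 0.0}
assert two_stage_search(graph, lambda v: d[v], 0, 2) == [4, 3]
```

The two phases mirror the rebuttal's description: S1 stops as soon as an update round leaves $R$ unchanged, and S2 then widens the check to every visited vertex's neighborhood before terminating on the same no-update condition.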
**Q6** Thank you for the suggestion. We'll incorporate gridlines in all the plots to enhance readability. --- Rebuttal Comment 1.1: Title: Acknowledging author response and updating score Comment: I have read the author response and my concerns have been addressed. Having skimmed through other reviews and corresponding author responses, it looks like the authors have addressed major weaknesses pointed out by other reviewers. I am leaning towards accepting the paper and have updated my rating from `5: borderline accept` to `7: accept`. I would encourage authors to add these additional results to the paper (in the appendix perhaps) and improve the presentation of the results. The main set of results which answer key research questions can be in the main paper while other exhaustive set of results can be moved to the appendix. --- Reply to Comment 1.1.1: Title: Thanks! Will Add Additional Results. Comment: Thank you for your prompt reply! We are glad that our responses helped clarify things. We are also grateful for your updated rating and the valuable feedback provided. We will include the additional results and enhance the presentation of our findings as you suggested.
Summary: The paper discusses how a hybrid query finds objects that are both similar to a feature vector and match some structured attributes. However, existing methods handle ANNS and attribute filtering separately, leading to inefficiency and inaccuracy. The paper proposes a new efficient and robust framework called native hybrid query (NHQ) and two new navigable PGs (NPGs) with optimized edge selection and routing, which improve the overall ANNS performance. Strengths: 1. Optimized edge selection and routing are proposed, which are efficient for ANNS problems. 2. The authors perform a sufficient complexity analysis of the proposed method. This helps readers understand the superiority and limitations of the proposed method. 3. Many experiments have been conducted. The authors have conducted experiments on multiple datasets to show their superiority in terms of accuracy, efficiency, and memory usage. Weaknesses: 1. Compared to PQ-based methods, the PGs need more storage at runtime. Thus, a theoretical and experimental discussion of the storage cost of NHQ versus PQ-based methods is necessary. There is still room for improvement in the trade-off between storage and efficiency. 2. More related works (HQ-based methods and edge selection strategies) are needed. 3. Lack of results for NHQ without the edge selection strategy in the ablation study. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Discuss the storage cost of NHQ theoretically and experimentally against PQ-based methods. 2. Include results of NHQ without the edge selection strategy. This comparison can help readers understand the strengths of the proposed edge selection strategy. 3. Include more related works (HQ-based methods and edge selection strategies). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to Paper Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1, Q1: Storage Cost Analysis** We appreciate your suggestions and agree that comparing the storage cost of NHQ and PQ-based methods is necessary. * Theoretically NHQ and PQ-based methods have the same attribute storage cost, so we only analyze their feature vector storage cost. PQ-based methods compress high-dimensional vectors into the Cartesian product of multiple sub-codebooks. Let a vector $x$ be split into $M$ sub-vectors $u_j$, $1\leq j \leq M$, of dimension $D^*=D/M$, where $D$ is a multiple of $M$. The sub-vectors are quantized separately using $M$ quantizers with $K$ centroids each. We need to store the $M \times K$ centroids, i.e., $KMD^* = KD$ floating-point values ($4KD$ bytes). Each vector is compressed into a code of $L=M\log_2 K$ bits ($L/8$ bytes). The storage cost of PQ on $N$ vectors is: $4KD + NL/8$ bytes. NHQ builds a graph index for $N$ vectors with at most $R$ neighbors each. We use an adjacency list (a vertex with $R$ neighbor ids) to store the index, costing $4R$ bytes per vertex. We also store each raw vector in $ZD$ bytes, where $Z$ is the size of one value ($Z=4$ for float32). The storage cost of NHQ on $N$ vectors is: $ZND + 4NR$ bytes. On the SIFT1M dataset, where $N=10^6$, $D=128$, and $Z=4$, PQ sets $K=256$ and $M=32$, costing 32,131,072 bytes (~31MB). NHQ sets $R=20$, costing 592,000,000 bytes (~565MB). Note that PQ is often combined with IVF, known as IVFPQ, which adds a coarse step to find probing centroids near the query. This increases the storage cost of PQ-based methods depending on specific optimizations. * Experimentally We compare the storage cost of NHQ and PQ-based methods on eight datasets in Tab. 2 of the attached PDF. We observe that: (1) NHQ costs more than PQ-based methods, consistent with our theory; (2) different PQ-based methods have different costs due to extra structures for optimization; (3) the same method has different costs on different datasets due to different optimal parameter configurations. Notably, PQ-based methods have low accuracy due to compression loss.
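The two storage formulas above can be checked numerically; the sketch below simply transcribes them and the SIFT1M figures quoted in the rebuttal (the function names are ours, not the paper's).

```python
import math

def pq_storage_bytes(N, D, K, M):
    """Codebook: M*K centroids = K*D float32 values (4*K*D bytes),
    plus N codes of L = M*log2(K) bits each (L/8 bytes per vector)."""
    L = M * math.log2(K)  # code length in bits
    return 4 * K * D + N * L / 8

def graph_storage_bytes(N, D, R, Z=4):
    """N raw vectors of D values (Z bytes per value) plus an adjacency
    list with up to R neighbor ids (4 bytes each) per vertex."""
    return Z * N * D + 4 * N * R

# SIFT1M figures from the rebuttal: N=10^6, D=128, K=256, M=32, R=20.
assert pq_storage_bytes(10**6, 128, 256, 32) == 32131072   # ~31 MB
assert graph_storage_bytes(10**6, 128, 20) == 592000000    # ~565 MB
```

The roughly 18x gap between the two totals is dominated by the raw-vector term $ZND$, which matches the rebuttal's point that graph-based indexes trade storage for search efficiency.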
As shown in [7] [8] and our experiments (Fig. 6 (a)-(c) of our paper), PQ-based techniques sacrifice accuracy (<0.8 on most datasets) for cost saving. In contrast, NHQ achieves close to 1 recall with high efficiency. Therefore, we believe that there is room for improvement in the trade-off between storage and search performance. We plan to explore this in our future work. We'll add this analysis to our final version. **W2, W3, Q2, Q3: Missing Baselines/Ablations** We appreciate your suggestions and have added more related works and evaluations on HQ-based methods and edge selection strategies. - HQ-based methods We noticed that Filtered-DiskANN [3] was released after our submission. Before that, we had considered all related work on hybrid queries and compared all SOTA methods. Filtered-DiskANN proposes optimizations called FilteredVamana and StitchedVamana on DiskANN. FilteredVamana connects vertices with shared attributes. StitchedVamana builds separate graph indexes for each filter and overlays them. While these optimizations enhance performance, Filtered-DiskANN's limitation lies in its inability to handle queries with multiple attributes. In contrast, NHQ supports any attribute combination in a query. We compare NHQ and Filtered-DiskANN on SIFT1M dataset, considering vectors with 3 attribute types. To execute Filtered-DiskANN, we only test single-attribute queries for all competitors. To eliminate other factors, we implement NHQ on DiskANN (NHQ can be easily extended to the current graph index), named NHQ-DiskANN. We keep the same parameters in DiskANN. The results are in Fig. 1 of the PDF. NHQ-DiskANN outperforms Filtered-DiskANN consistently, in both memory and disk versions. Upon analysis, we observe that SIFT1M has up to 180 attribute combinations, which may challenge Filtered-DiskANN in building a high-quality graph index. Additionally, when # attributes is large, StitchedVamana will build many graph indexes, increasing the indexing cost. 
Moreover, FilteredVamana only considers one matched attribute between a vertex and its neighbors, which also limits its application in complex attribute combinations. - Edge selection strategies We add more related works on edge selection strategies, including NGT [9], HNSW [6], HCNNG [10], and NSG [11]. NGT: It builds a KNNG by incrementally inserting vertices and obtaining their nearest neighbors through a greedy search. It then optimizes the distribution of neighbors for each vertex using an effective path adjustment strategy. HNSW: It generates a hierarchical graph, where the vertices on the upper-level graph are a subset of the lower-level graph. It not only selects the nearest neighbors for an inserted point but also considers the distribution of neighbors using a heuristic edge selection strategy. HCNNG: It divides the dataset into multiple hierarchical clusters, where all points in each cluster are connected through a minimum spanning tree. NSG: It deploys an edge selection strategy based on the monotonic relative neighborhood graph. It prunes edges by searching for candidate neighbors on a KGraph, which is a KNNG. We compare our edge selection with the above four strategies on SIFT1M and GIST1M. To ensure a fair comparison, we implement our edge selection on a recent evaluation framework [5], which has deployed all above four strategies. Notably, all competitors use the same routing strategy and distance function. Fig. 2 (a) and (b) in the PDF show our strategy outperforms all other competitors for the Speedup-Recall trade-off. Our strategy also demonstrates more efficient index construction and smaller index size in Tab. 3 of the PDF. Additionally, in Fig. 2 (c) and (d) of the PDF, we evaluate indexes with and without our edge selection strategy, keeping other design variables the same. The results show that our edge selection brings significant performance gains in both the context of HQ and ANNS. 
We'll summarize the above analysis and include the main results in the revision.
Summary: Attribute filtering (AF) is an important part of many scenarios using nearest neighbor search. Here, each data point has a feature vector in a geometric space and also a set of attributes (e.g., date, author), and queries must be matched to the nearest vectors satisfying some attribute constraints. While many algorithms have been studied for the classic ANNS problem, ANNS + AF is hard and needs new algorithms. Recently, there has been a flurry of attempts at this. Some of the basic approaches include filtering results of classic ANNS (which tends to yield poor results) or building separate indices for each attribute (which leads to duplication). This paper proposes that a better way to address this might be to create a fusion distance that incorporates the geometric distance between feature vectors and a suitably normalized similarity score between attribute vectors. The authors then argue that a proximity graph data structure can be built using this distance, and that this performs better than the other baselines selected in the paper. Strengths: The authors identify and articulate an important problem. Empirical comparisons are made with many baselines. Weaknesses: There is no description of the dataset design in the main section. In Appendix O, the datasets are described as usual vector datasets with 3 attributes (e.g., date, location, size). It looks like each vector can only have one possible combination of the attributes, so one can model it as one attribute dimension (cross-product). In such a case, why would it not be easier and faster to build separate indices for each possible choice of attributes? Isn't a complex index design only needed when datasets have multiple labels within an attribute dimension? In any case, it is impossible to evaluate the algorithm without well-motivated datasets. Missing baseline: Filtered-DiskANN [WWW'23]; the methods therein and the equivalent code (in-memory and on-disk) are not compared. Instead, weaker baselines are compared.
There is far too much Greek notation. The notation can be greatly simplified for better readability. Simple pseudocode would also make for much better reading. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In the preliminaries, can you clearly state the attribute filtering problem and the definition of recall for the attribute filtering problem? It is unclear how the attribute vectors of indexed data and queries are matched. Please describe the datasets clearly in the main section. Please describe the encoding l(.) function for your datasets. Why are proximity graphs (PGs), and not navigation graphs (e.g., NSG, HNSW), the starting points for your construction? Proximity graphs tend to be disconnected even for non-AF use cases. I would like to see some analysis of the connectivity of PGs on AF-datasets. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Dataset Issue** We appreciate your comments. Our datasets are diverse in size, # attributes, dimensions (feature and attribute vectors), and domains (image, video, etc.). Each vector has 3-9 attribute types with various values, common in real-world scenarios [1] [2]. A data point with 3 attribute types (date, location, size) may have a value for each type. We agree that 3 values form one combination. But we consider not only the full combination of 3 values, but also partial combinations of 2 or 1 values. This leads to many combination cases (up to 26244 in our dataset). Some queries seek the nearest vectors that match partial attribute constraints (e.g., date regardless of location or size). Modeling the combination as a single attribute dimension is infeasible for such queries. Building separate indexes for each combination is also time-consuming [3] and impractical for queries with partial constraints. Hence, we designed a composite index that can handle diverse queries based on fused distances (it computes the preferred attribute dimensions while ignoring the unconstrained ones). We’ll clarify the dataset design in the main text. **W2: Missing Baselines** Thanks for your suggestions and sorry for the missing baselines. We noticed that Filtered-DiskANN [3] was released after our submission. Before that, we had considered all related work on hybrid queries and compared all SOTA methods. Filtered-DiskANN proposes two optimizations based on DiskANN: FilteredVamana and StitchedVamana. FilteredVamana connects vertices with shared attributes. StitchedVamana builds separate graph indexes for each filter and overlays them. These optimizations improve performance significantly. However, Filtered-DiskANN only supports single-attribute queries. This limits its applicability in scenarios requiring multiple attributes. For example, in product search, users may input a query image with color and size filters. 
Filtered-DiskANN cannot handle such cases. In contrast, NHQ supports any attribute combination in a query. We now add a comparison between NHQ and Filtered-DiskANN on SIFT1M dataset. Each vector has 3 attribute types, with 6, 6, and 5 values respectively. To execute Filtered-DiskANN, we only test single-attribute queries for all competitors. To eliminate other factors, we implement NHQ on DiskANN (NHQ can be easily extended to the current graph index), named NHQ-DiskANN. We keep the same parameters in DiskANN. The results are in Fig. 1 of the attached PDF. NHQ-DiskANN outperforms Filtered-DiskANN, in both memory and disk versions. Upon analysis, we observe that SIFT1M has up to 180 attribute combinations, which may challenge Filtered-DiskANN in building a high-quality graph index. Additionally, when # attributes is large, StitchedVamana will build many graph indexes, increasing the indexing cost. Moreover, FilteredVamana only considers one matched attribute between a vertex and its neighbors, which also limits its application in complex attribute combinations. **W3: Presentation Issue** Thank you for your suggestions to enhance readability. We'll use simpler symbols instead of Greek notation and add more pseudocodes. We'll also move the pseudocodes of the composite index and joint search from the appendix to the main text. Moreover, we'll provide the pseudocodes of our edge selection and routing. **Q1** Thanks for raising this point. The AF problem involves an object set $C$ and a query object $q$ with attributes $a_1,\cdots,a_m$ of size $m$. The goal is to find objects $G$ in $C$ that share the same attributes as $q$. For any $e\in G$, it holds $\forall i=1,2, \cdots, m, e . a_{i}=q . a_{i}$, where $e . a_{i}$ is the value of attribute $a_i$ of $e$. Note that $q$ may have partial attribute combinations (see W1). In this case, $e$ can have any value for the unconstrained attribute types. 
For the ANNS+AF problem, we also require the feature vector of $e$ in $G$ to be closest to that of $q$, resulting in the ground-truth set $G^*$ of ANNS+AF. Hence, the recall for ANNS+AF is $\frac{|R\cap G^*|}{k}$, where $R$ is the result set from an ANNS+AF method, and $k$ is the result size. However, recall for AF alone is unnecessary, as AF requires all results to match $q$'s attribute constraint, leading to a recall of 1. To illustrate attribute matching, consider an example with 2 attribute types: date and city. Suppose we have an object $e$ with $e.date=2023$ and $e.city=New\ York$, encoded as an attribute vector [1,3]. When the attribute vector of a query $q$ is [1,3] or [1,null] (where 'null' means no constraint), the attribute distance between $e$ and $q$ is 0, indicating a perfect match. However, if $q$ is [2,1], the attribute distance is 2 since both attributes mismatch; if $q$ is [null,1], the distance is 1 since only the city value mismatches. We'll clarify AF in the preliminaries. **Q2** Thanks for your suggestions. We follow previous works [4] and use ordinal encoding (i.e., our l(.)) to obtain attribute vectors for all datasets. For example, “New York” is encoded as 1 and “Beijing” as 2. Note that l(.) can use other encoders such as one-hot encoding. We'll clarify the datasets and l(.) in the main text. **Q3** Most navigation graphs diversify the neighbors on a base graph. For example, NSG and HNSW implement their edge selection based on KGraph and NSW, respectively, and achieve SOTA performance for ANNS [5]. Similarly, we build our navigation graph based on KGraph and NSW, using our edge selection. The results show that our strategy outperforms NSG and HNSW (see Fig. 2 of the PDF). We evaluate the connectivity of PG on both AF-datasets and non-AF-datasets (see Tab. 1 in the PDF). The results show that PG has similar connectivity in both cases. We know that KNNG has poor connectivity due to the cluster characteristics of non-AF-datasets [5].
PG diversifies the neighbor distribution and connects clusters [6]. For AF-datasets, vectors in different clusters may have the same attributes, which helps connectivity in NHQ.
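The attribute-distance and recall definitions above can be sketched as follows. This is our own illustrative sketch: the helper names (`attr_distance`, `recall`) are hypothetical, and NHQ's actual implementation fuses this attribute distance with the vector distance rather than using it alone.

```python
def attr_distance(e_attrs, q_attrs):
    """Count mismatched attribute types between an object and a query.

    None in the query means 'no constraint' on that attribute type,
    so it never counts as a mismatch (the partial-combination case).
    """
    return sum(1 for e, q in zip(e_attrs, q_attrs)
               if q is not None and e != q)


def recall(result_set, ground_truth, k):
    """Recall for ANNS+AF: |R intersect G*| / k."""
    return len(set(result_set) & set(ground_truth)) / k


# Example from the rebuttal: e encodes date=2023, city=New York as [1, 3].
e = [1, 3]
assert attr_distance(e, [1, 3]) == 0      # fully constrained, perfect match
assert attr_distance(e, [1, None]) == 0   # city unconstrained, still a match
assert attr_distance(e, [2, 1]) == 2      # both attribute types mismatch
```

A result set is then scored against the ANNS+AF ground truth with `recall`, e.g. 2 of the top-4 ground-truth objects retrieved gives recall 0.5.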
Rebuttal 1: Rebuttal: We would like to express our gratitude to all four referees for providing us with valuable suggestions regarding the presentation and experimental studies. These suggestions have been immensely helpful in enhancing the quality of our paper. In response to the major concerns raised, we have conducted additional key experiments and included the main results in the attached PDF file. Furthermore, we will carefully revise the methodology and experiment sections to enhance their readability in the final version. Please refer to the detailed responses provided below for each individual reviewer. Due to space limitations, we put all references in here. **References** [1] VLDB'20: Wei, et al. Analyticdb-v: A hybrid analytical engine towards query fusion for structured and unstructured data. [2] SIGMOD'21: Wang, et al. Milvus: A purpose-built vector data management system. [3] WWW'23: Gollapudi, et al. Filtered-DiskANN: Graph algorithms for approximate nearest neighbor search with filters. [4] TPAMI'15: Wang, et al. Exploring local and overall ordinal information for robust feature description. [5] VLDB'21: Wang, et al. A comprehensive survey and experimental comparison of graph-based approximate nearest neighbor search. [6] TPAMI'18: Malkov, et al. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs. [7] NeurIPS'19: Jayaram Subramanya, et al. Diskann: Fast accurate billion-point nearest neighbor search on a single node. [8] NeurIPS'20: Ren, et al. HM-ANN : Efficient billion-point nearest neighbor search on heterogeneous memory. [9] SISAP'16: Iwasaki, et al. Pruned bi-directed k-nearest neighbor graph for proximity search. [10] PR'19: Hierarchical clustering-based graphs for large scale approximate nearest neighbor search. [11] VLDB'19: Fast approximate nearest neighbor search with the navigating spreading-out graph. [12] KBS'21: Xu, et al. 
Two-stage routing with optimized guided search and greedy algorithm on proximity graph. [13] WWW'23: Chen, et al. FINGER: Fast inference for graph-based approximate nearest neighbor search. Pdf: /pdf/6395f892990a5011f80631af788d8cef22487ea9.pdf
NeurIPS_2023_submissions_huggingface
2,023
Learning a 1-layer conditional generative model in total variation
Accept (poster)
Summary: The paper investigates the sample complexity of learning conditional generative models without assumptions on the input distribution. It applies the Maximum Likelihood Estimator (MLE) to linear regression and 1-layer networks with ReLU activation. The results show that the MLE achieves small total variation error with sample complexities of $O((k/\epsilon^2) \log(1/\epsilon))$ for linear regression and $O(((kd + d^2)/\epsilon^2) \log(kd\kappa/\epsilon))$ for 1-layer networks. The paper also discusses the extension to multilayer networks, given access to the internal activations. The results suggest that MLE is a promising approach for learning feed-forward generative models from limited samples, though the authors did mention that the computational aspects of the optimization problem are not thoroughly analyzed in the paper. Strengths: This paper provides a solid theoretical foundation for understanding the sample complexity of learning multi-layer ReLU networks using the MLE method. The derived bounds do not make assumptions on the distribution of X or the condition number of W, and achieve a sample complexity polynomial in the system parameters. The developed algorithm and theory show considerable improvement over those in the existing literature. Though there are limitations that are also noted by the authors, I think the paper is innovative and provides interesting insights into efficient learning of conditional generative models. Moreover, the paper is well-written and presents its concepts and results in a clear and concise manner. Weaknesses: The weaknesses and limitations of the paper are well noted and discussed by the authors. 1. To extend the theory to multilayer neural networks, it requires access to intermediate activations, which is impractical. 2. It assumes that the learner has an understanding of the model architecture, which might not be the case in practice. 3.
It might be challenging to perform MLE on some of the models being considered, such as neural networks, and the computational aspects of the optimization problem are not thoroughly examined in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the practical implications of the sample complexity bounds derived in the paper? How do these bounds impact the feasibility and scalability of learning generative models in real-world applications? I understand that the paper is limited by space restrictions, but can you provide some insights into how to go about addressing the requirement for accessing intermediate activations in order to apply the proposed algorithm to solving deep neural nets? Is it going to be a deal breaker for the practical applicability of the proposed method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback, we are delighted that you find our results to be a solid theoretical foundation and a considerable improvement over the existing literature. Please find below our response to your questions: **Question 1:** "What are the practical implications of the sample complexity bounds derived in the paper? How do these bounds impact the feasibility and scalability of learning generative models in real-world applications?" An important part (perhaps the most important part) of our complexity bounds is that they do not depend on the input distribution of $x$. This means that our result can be composed for learning multi-layer networks. It also means that learning a one-layer ReLU network in TV, which is in itself a valid and potentially useful statistical framework, is possible with a sample complexity independent of the input distribution of $x$. In terms of learning actual generative models, this work suggests that learning models in TV is possible even when learning model parameters is not. This might help to explain recent work (https://crfm.stanford.edu/2023/03/13/alpaca.html, https://arxiv.org/pdf/2305.11206.pdf) where researchers are able to essentially clone the behaviour of other language models with a very small fraction of samples when fine-tuning another language model. This may be related to the *superficial alignment hypothesis*, which says that these models only require a superficial modification to align their behaviours. It is clear in this case that the weights are not being learned. **Question 2:** "I understand that the paper is limited by space restrictions, but can you provide some insights into how to go about addressing the requirement for accessing intermediate activations in order to apply the proposed algorithm to solving deep neural nets? Is it going to be a deal breaker for the practical applicability of the proposed method?"
This is a very interesting question and one we are actively considering. We think that the approach in Allen-Zhu and Li [1] can actually be used in our paper as well. Allen-Zhu and Li assume that the activations are sufficiently sparse across layers, which means they can use results from sparse-coding to recover these activations given only images at these layers. The main difficulty in directly combining the results in [1] into our algorithm as a subroutine is that they are only able to *approximately* recover the activations -- this would mean there is a distributional mismatch between the true activations and the estimated activations. If this distribution mismatch is in TV distance, then the results in [1] can be straightforwardly incorporated into ours, where the error in the activations will be an additive error in our results. Unfortunately, the error in [1] is wrt the $\ell_2$-norms of the activations -- this means that the true and estimated activations would perhaps only be close in Wasserstein distance. Accommodating this distribution mismatch is an open problem we are exploring, and one that would make our method more practical. References: [1] Allen-Zhu, Zeyuan, and Yuanzhi Li. "Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions." The Eleventh International Conference on Learning Representations. 2022. --- Rebuttal Comment 1.1: Comment: Thank you very much for the response
Summary: This paper studies conditional distribution learning: given iid samples (x,y) where x ~ D and y ~ p(y|w*,x), the goal is to find some estimate w such that the distributions p(y|w*,x) and p(y|w,x) are close in expectation over x ~ D, or equivalently the learned distribution of (x,y) (where x ~ D) is close to the true distribution. Specifically, this work studies the 1-layer conditional generative model y = max(W* x + eta, 0), where W* is a matrix and eta is a multivariate mean-zero Gaussian with some unknown covariance Sigma*. The main result shows that for an arbitrary covariate distribution D and arbitrary W*, so long as Sigma* has condition number at most kappa, the sample complexity of MLE (needed to learn up to total variation distance epsilon) is polynomial in the dimension parameters, log(kappa), and 1/epsilon. A straightforward extension to multi-layer networks is also given (under the assumption that intermediate activations are known). Strengths: - This work introduces a (to my knowledge) novel perspective on the problem of learning generative models: distribution learning rather than parameter estimation. This more accurately addresses the problem that actually matters in practice for generative models, and avoids needing to worry about identifiability issues. - Due to the new goal, no distributional assumptions are needed (aside from the tame bound on the condition number of the noise). - The paper is well-written, with a toy example of linear regression given for intuition. I did not have time to check the proofs, but the approach and proof sketch seem reasonable. Weaknesses: - As the authors acknowledge, the paper only addresses the statistical question. It's claimed that the MLE is concave; however, I could not find a proof of this fact in the paper. Moreover, looking at the log-likelihood function (8), I do not see why it should be concave.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
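The 1-layer conditional model summarized in this review can be sketched in a few lines. This is our own illustration: the names (`sample_layer`, the dimensions, the parameter values) are hypothetical, not the paper's notation or code.

```python
import numpy as np

def sample_layer(W, Sigma, x, rng):
    """One draw from the 1-layer conditional model y = max(W x + eta, 0),
    with noise eta ~ N(0, Sigma), as studied in this submission."""
    eta = rng.multivariate_normal(np.zeros(W.shape[0]), Sigma)
    return np.maximum(W @ x + eta, 0.0)

# Illustrative dimensions and parameters.
rng = np.random.default_rng(0)
d, k = 4, 3                       # x in R^d, y in R^k
W = rng.standard_normal((k, d))   # ground-truth weights W*
Sigma = 0.1 * np.eye(k)           # noise covariance Sigma* (condition number 1)
x = rng.standard_normal(d)        # x ~ D: an arbitrary covariate distribution
y = sample_layer(W, Sigma, x, rng)
assert y.shape == (k,) and np.all(y >= 0)
```

The multi-layer extension composes this map: the output of one layer serves as the covariate $x$ of the next, which is why a covariate-distribution-free guarantee per layer is what makes composition possible.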
Rebuttal 1: Rebuttal: Thank you for your positive feedback, we are delighted that you find our problem novel and the results significant. Please find below our response to your concern regarding our claim that the MLE is concave. **Question 1:** "It's claimed that the MLE is concave, however, I could not find a proof" We apologize, due to an editing error we neglected to provide a proof of this in the supplementary. The proof actually follows as a straightforward consequence of the following fact. The objective in Eqn (8) is an integral of the product of two log-concave functions: (i) the indicator function over the negative orthant, and (ii) the Gaussian likelihood. As (i) and (ii) are log-concave functions, the integral of their product in Eqn (8) is also log-concave. Log-concavity being preserved under such integration is a non-trivial fact -- for a concrete reference, please refer to Page 106 in Boyd and Vandenberghe (https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf), in particular, Examples 3.42, 3.43, and 3.44. --- Rebuttal Comment 1.1: Title: Confused Comment: I'm confused: isn't $\Sigma$ also unknown? I get that the Gaussian likelihood is log-concave in $W$. But is it log-concave jointly in $W$ and $\Sigma$? --- Reply to Comment 1.1.1: Title: Follow-up Comment: Thank you for following up! You are correct to suggest that the objective is not concave in the standard mean-covariance parameterization. There is, however, a simple, easily invertible transformation of these parameters under which the objective is concave. Let $U = \Sigma^{-1}$ and $v = \Sigma^{-1}Wx$ (the "natural" parameter space we discuss in Appendix E). To simplify things slightly, as we did in Appendix E.3.2, we have made the conditioning on $x$ implicit in our parameters.
The un-truncated density is a multivariate normal, and is thus written as an exponential family in this natural parameter space*: \begin{equation*} p_{U,v} \left( y | x \right) = \exp \left( -\frac{1}{2}y^TUy + y^Tv - A(U,v)\right), \end{equation*} where $A(U,v)$ is the cumulant function (note this is distinct from the related cumulant generating function). A well-known result [1] is that $A$ is jointly convex in $U$ and $v$. Taking logs and using this fact shows that $p_{U,v} \left( y | x \right)$ is log-concave in $U,v$. Our truncated density is simply: \begin{equation*} f_{U,v} \left( y | x \right) = \int_{y_S \leq 0} p_{U,v} \left( y | x \right)dy_S. \end{equation*} As we mentioned in our original response, for any log-concave density $f(x)$, integration over a convex subset of the coordinates preserves log-concavity. Thus the objective is log-concave. We hope this clears up your confusion. *A few more steps going between the standard and natural parameters: \begin{eqnarray} p \left( y | x \right) &=& \exp\left(-\frac{1}{2}\left( y - Wx \right)^T \Sigma^{-1}(y - Wx) - \frac{1}{2}\log\mid 2\pi \Sigma \mid \right),\\\\ &=& \exp\left(-\frac{1}{2} y^T\Sigma^{-1}y + x^TW^T\Sigma^{-1}y - \frac{1}{2} x^TW^T\Sigma^{-1}Wx - \frac{1}{2}\log\mid 2\pi \Sigma \mid \right) \\\\ &=& \exp\left(-\frac{1}{2} y^TUy + v^Ty - \frac{1}{2} v^T U^{-1}v - \frac{1}{2} \log\left({\left(2 \pi\right)^n} / \mid U \mid \right)\right) \\\\ \end{eqnarray} By definition [1], the part of the density in the exponent that does not depend on $y$ is the cumulant function. **References:** [1] Jordan, M. Stat 260 *The Exponential Family: Basics* (https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/other-readings/chapter8.pdf)
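The joint convexity of the cumulant cited above can be checked numerically. The sketch below (our own illustration, with hypothetical helper names) evaluates $A(U,v) = \frac{1}{2} v^T U^{-1} v + \frac{1}{2}\log\left((2\pi)^n / |U|\right)$, read off from the last display above, and verifies midpoint convexity on random positive-definite pairs:

```python
import numpy as np

def cumulant(U, v):
    """Cumulant A(U, v) of the Gaussian in natural parameters
    U = Sigma^{-1}, v = Sigma^{-1} W x."""
    n = len(v)
    sign, logdet = np.linalg.slogdet(U)
    assert sign > 0, "U must be positive definite"
    return 0.5 * v @ np.linalg.solve(U, v) + 0.5 * (n * np.log(2 * np.pi) - logdet)

def random_spd(n, rng):
    """A random symmetric positive-definite matrix."""
    B = rng.standard_normal((n, n))
    return B @ B.T + 0.1 * np.eye(n)

# Midpoint-convexity check: A((U1+U2)/2, (v1+v2)/2) <= (A(U1,v1)+A(U2,v2))/2.
rng = np.random.default_rng(1)
n = 3
for _ in range(200):
    U1, U2 = random_spd(n, rng), random_spd(n, rng)
    v1, v2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs = cumulant(0.5 * (U1 + U2), 0.5 * (v1 + v2))
    rhs = 0.5 * (cumulant(U1, v1) + cumulant(U2, v2))
    assert lhs <= rhs + 1e-9
```

Since $A$ is jointly convex, $\log p_{U,v}(y|x) = -\frac{1}{2}y^T U y + y^T v - A(U,v)$ is concave in $(U,v)$, which is the log-concavity claim.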
Summary: In this paper, the authors consider the problem of linear regression and ReLU applied to linear regression. The goal is to recover the weight vector such that the resulting distributions are close as opposed to recovering the weight vector under certain norms such as $\ell_2$ which has been thoroughly studied. For the linear regression, the authors show that the MLE estimator learns the distribution in total variation distance. For the ReLU regression, the authors show again the MLE estimator works when the covariance matrix of the noise is well-conditioned. The resulting algorithm is sample-efficient but not time-efficient. Strengths: The problem considered is fundamental and it has important connections to learning practically important problems. Weaknesses: I think the scientific novelty and contribution of this paper is very limited. The proposed results were well-known in the literature in my opinion. As an example, the distributional learning guarantee of linear regression in TV distance was analyzed in [arXiv:2107.10450]. Similarly, one-layer ReLU networks have been analyzed in the works of [Diakonikolas et al, Klivans et al, and Arora et al]. Moreover, the condition number assumption and the high running time of the proposed algorithms are quite restrictive in my opinion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: None, Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are disappointed that you've found no positives in our paper. After reading the other reviews and this rebuttal, could you please let us know if you have any questions that can help us improve our paper? Please find below our responses to your concerns. **Weakness 1:** "The proposed results were well-known in the literature in my opinion. As an example, the distributional learning guarantee of linear regression in TV distance was analyzed in [arXiv:2107.10450]." 2107.10450 considers a Gaussian Bayesian network, where *all* the variables are Gaussian, and they can find parameters that are close to the true distribution over these Gaussian random variables. In our problem, the distribution of $x$ is not Gaussian, and should be thought of as a distribution over Word2Vec embeddings. The conditional distribution of $y|x$ is assumed to be Gaussian, and the final sample complexity is independent of any parameter other than the dimension of $x$. **Weakness 2:** "Similarly, one-layer ReLU networks have been analyzed in the works of [Diakonikolas et al, Klivans et al, and Arora et al]." These are some of the most prolific names in learning theory, and without pointing to specific papers, we have no concrete way of responding to your criticism. Nonetheless, in an attempt to guess your concern, the simplest response is that none of these results can be extended to multi-layer networks, whereas our assumptions and analysis allow us to consider multi-layer networks. As we make no assumptions on the distribution of $x$, this allows us to compose our one-layer guarantee recursively over multiple layers. The assumption that the conditional distribution per layer is a truncated Gaussian is actually used in practice, for example in StyleGAN [1], where the ``style'' vectors are random Gaussian noise variables with learned mean and covariance. References: [1] Karras, Tero, Samuli Laine, and Timo Aila.
"A style-based generator architecture for generative adversarial networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019. **Weakness 3**: "the condition number assumption and the high running time of the proposed algorithms are quite restrictive in my opinion" Our sample complexity has a logarithmic dependence on the condition number, which allows for an exponentially large condition number with only a polynomial increase in the sample complexity. We agree that the running time is a concern, but as the optimization problem involves maximizing a concave function over a convex set, we believe a better-designed optimization algorithm should be able to achieve a rigorous guarantee. The goal of this work is to show that MLE can be used for *distributional learning* of generative models with a polynomial sample complexity. --- Rebuttal Comment 1.1: Title: response Comment: I have read the rebuttal of the authors. Unfortunately, I am not convinced by their response regarding the points I raised. Indeed, I am talking about a large body of work over the years by the three co-authors I mentioned on learning neural networks under different assumptions, including Gaussianity. I also believe the condition number assumption and the exponential running time are quite restrictive. --- Reply to Comment 1.1.1: Comment: This is absurd. Do you have any *specific* papers that you think get comparable guarantees, namely, learning in TV under no distributional assumption on x?
Summary: The article provides complexity bounds for learning the conditional distribution y|x. One of the main novelties claimed by the authors is that controlling the TV distance between the estimated distribution and the ground truth is more meaningful. Thus, they are able to provide bounds independent of the distribution of the label x. The article is well written and the flow is quite pleasant. See my comments below for more critical details. Strengths: The document is clearly written and the main arguments are transparent and easy to follow (even if they remain fairly technical). The authors did a great job explaining the intuition before getting into formal details. The distribution-free result (wrt the label x) is quite remarkable. Weaknesses: My main concern is that the results presented might be difficult to compare to classical results on the subject. Below I describe some of my misunderstandings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Usually x is the features and y the labels. The notation might be misleading. As a matter of fact, the product $x \cdot w^*$ is even weird when the label does not belong to a vector space. - On the same flavor, it is difficult to compare with classical results, since the assumptions are usually in the feature space, i.e. a distribution assumption on the object $x$, and the label is generated conditional on $x$ and some noise. It is unclear how the results fit on the same ground. - Lines 32-35 need explicit references and maybe explicit classical bounds to discuss. - The complexity presented is not explicitly compared with (the criticized) classical ones. Hence it makes the contributions harder to appreciate. - Theorem 4.1 (and probably others) must state the full assumptions, e.g. $n \geq k/2$, which usually does not hold in the high-dimensional regime where the number of features is way larger than the sample size.
- As a main proof technique, the authors rely on the relation between TV distance and prediction error in the equation below line 159, mainly $d(\hat w, w^*) \approx |x\cdot \hat w - x \cdot w^*|$. And then we directly fall into the OLS analysis as in https://www.di.ens.fr/%7Efbach/ltfp_book.pdf Chapter 3. Can the authors comment on the main novelties after this? Usually the prediction error does not require much restriction on the design matrix (compared to estimation error control). - In Lemma 4.3, can the authors specify what the probability Pr is? Also, since the (empirical) distance is Lipschitz, can't we deduce this lemma directly from Talagrand's inequality? - In which practical settings do we know the ground-truth condition number? - Also in Theorem 4.5, $\hat W$ and $\hat \Sigma$ might be non-unique. Can the authors comment on that? Some failures of MLE are well known; for example, in Chapter 24.1.3 of https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf an explicit overfitting example is shown. Can the authors discuss how their proposition escapes such a situation? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed analysis of our paper. We will include your suggestions in future versions. **Question 1:** "Usually $x$ denotes features and $y$ the labels... $x \cdot w^*$ is even weird when the label does not belong to a vector space." We apologize for the confusion -- we used labels in the sense of class labels that would be passed to a text-conditional generative model. In the context of linear regression, $x$ would be feature vectors (such as Word2Vec embeddings) and $y$ would be the data generated. We will change this to reduce confusion. **Question 2:** "On the same flavor, it is difficult to compare with classical results, since the assumptions are usually in the feature space, i.e. a distribution assumption on the object $x$, and the label is generated conditional on $x$ and some noise. It is unclear how the results fit on the same ground." While it would be nice for comparison to use the same assumptions as classical works, as you suggest, the nature of this work is such that this is not possible. A central aspect of this work is to examine this learning framework from a perspective more relevant to modern problems --- specifically, learning generative models. In doing so, we consider new assumptions: (1) no distributional assumptions over $x$; and new goals: (2) learning distributions in TV. (1) The reason we assume an unknown arbitrary distribution over $x$ is that it allows us to compose our guarantee over multiple layers. For example, if the generative model maps Word2Vec embeddings to images, then at the first layer, the distribution of $x$ would be that of Word2Vec embeddings. Then for the second layer, we can treat $x$ as the output of the first layer. This allows us to compose our guarantee over multiple layers, and hence allows for an expressive distribution over the data.
(2) Learning in TV is useful because in problems concerning generative models, we are generally concerned with the output of the model, rather than learning the parameters exactly. In our simulations (Section 5, Fig. 3 (a)), we provide a particular example where the classical objective fails, while our proposed objective succeeds. **Q3:** "Lines 32-35 need explicit references and classical bounds". We will add this, thank you for the suggestion. **Q4:** "The complexity presented is not explicitly compared with (the criticized) classical ones. Hence it makes the contributions harder to appreciate." In Section 4.2, we focused on comparing our results to existing results (Wu et al [1], Allen-Zhu and Li [31]) on generative models. The results in Section 4.1 are to build intuition for our proof techniques, and for what needs to change for the ReLU model in Section 4.2. We will add more citations to directly compare our results to existing work in Section 4.1. Ultimately, as mentioned in our answer to Question 2, the differences in assumptions and objectives mean that an exact direct comparison is difficult. **Q5:** "Theorem 4.1 ... must state the full assumptions, e.g. $n \ge k/2$". Thank you for the suggestion, we shall include this. This was implicit in Theorem 4.1 as $\varepsilon < 1$ with a large enough constant $C$, but we will explicitly mention it. We will also mention that these bounds are not for the over-parameterized regime. **Q6:** "What are the novelties over the OLS analysis?" The analysis in the reference provided by the reviewer has a sample complexity that depends on the design matrix $\Phi$, in terms of trace$[\frac{1}{n} \Phi^T \Phi]$. This would introduce a dependency on the $\ell_2$-norm of $x$ in Eqn 6 in our paper, and typical assumptions to deal with this would require bounded moments for $x$. Our sample complexity has no dependence on this design matrix.
We avoid this dependence on the $x$ distribution by adopting a similar analysis to Theorem 11.2 in Györfi et al (https://link.springer.com/book/10.1007/b97848), which is relatively simple because $d(\hat{w}, w^*)$ is bounded. For Theorem 4.5, we cannot fall into the OLS analysis for half the proof, and we use the Györfi approach twice. The second time is more challenging because the variables we need to concentrate are KL divergences, which are unbounded. **Q7:** "In Lemma 4.3, can the authors specify what the probability Pr is? Also, since the (empirical) distance is Lipschitz, can't we deduce this lemma directly from Talagrand's inequality?" The $\mathrm{Pr}(\cdot)$ refers to the probability over the finite data samples $x$ drawn i.i.d. from some arbitrary distribution $D_x$. That lemma is independent of the samples $y_i$, sorry for the confusion. The distance is actually not Lipschitz: it's Lipschitz in $\langle w, x \rangle$, but if $x$ is extremely large then a small change in $w$ can cause a large change in $d(w, w^*)$. So we don't see how to apply Talagrand's inequality without assumptions on the distribution of $x$. **Q8:** "In which practical settings do we know the ground-truth condition number?" This is a good point and an open question which we shall explicitly state. We only require an upper bound on the condition number, and as our sample complexity scales logarithmically in the condition number, we did not consider this to be a major limitation. **Q9:** "$\widehat{W}, \widehat{\Sigma}$ may not be unique". We will state this more explicitly in the paper, but we do not require uniqueness. The reason is that, as we are not trying to estimate the parameters themselves and instead care about fitting the distribution of the data, any parameters that fit the observed distribution are acceptable. **Q10:** "How does the proposition escape known failures of the MLE?"
In the limitations section, we did mention that our proposition is susceptible to failures of MLE, such as requiring knowledge of the generative model's architecture, exacerbating bias, etc. The failure case mentioned by the reviewer would correspond to exacerbating bias, as having too few samples leads to a heavily overfit solution.
Rebuttal 1: Rebuttal: **General response to reviewers** Thank you for your thoughtful reviews, we appreciate the time and effort that you put into the review process. We are delighted that the reviewers found our paper to have: a solid theoretical foundation [NYne], a novel perspective [9ZPm], considerable improvements over existing literature [NYne], clarity with good proof sketches[77Ng, 9DZK, 9ZPm, NYne], and distribution-free results that are significant and remarkable [77Ng, 9DZK, 9ZPm, NYne]. We have addressed individual reviewer concerns as separate replies to the respective reviewers.
NeurIPS_2023_submissions_huggingface
2,023
Summary: 1. This paper shows how MLE can perform distribution learning in the setting of linear regression and in multi-layer ReLU networks. 2. It does not take the distribution of the labels to be of any specific form, but rather leaves it unknown, and derives the sample complexity needed for a small total variation distance between the model's conditional distribution and the actual conditional distribution. 3. It improves the sample complexity of the previous work, which suffers from an exponential dependence on the ||W|| term. 4. It generalizes how the one-layer ReLU sample complexity can be extended to multi-layer ReLU networks. Strengths: 1. This work is a nice extension of the previous work mentioned in reference [27]. It relaxes the assumption of previous work, which assumed a fixed distribution of the labels. 2. The quality and clarity of the paper is good. It gives good proof techniques using ideas from learning theory. 3. Nice result for linear regression in Theorem 4.1, where the sample complexity is linear in k, the dimensionality of the labels, which is some finite value in most cases. 4. This paper can be significant where the label distribution is unknown but the data distribution is known. Weaknesses: 1. The proof techniques assume the distribution of the data is Gaussian, which is rarely the case, because the actual data distribution for generative models is thought of as some complex unknown distribution. 2. Even though Theorem 4.2 proved the result nicely, there is a dependence on the square of the dimensionality of the data. So the curse of dimensionality remains and these bounds can be very loose. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Theoretically, the distribution of the labels x being unknown is correct. But in most cases, labels follow some multinomial distribution with some parameters. Does this assumption make the bound better? It would be better if some analysis were done with the distribution of x being multinomial. 2.
The proofs rely on the distribution of $y$ being Gaussian. What would happen if you instead took the distribution of $y$ as unknown and $x$ as multinomial, as happens in many practical settings of time series and computer vision? 3. There is some confusion in the notation. In line 36 it is mentioned that $x$ denotes the labels and $y$ is the data generated using that label. In linear regression, the authors try to predict $y$ using $x$, which is correct. In Theorem 4.1, is the notation of $y$ being the data and $x$ being the labels followed, or is it reversed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. One limitation is that the proof techniques require the conditional distribution of the data to be Gaussian; in cases where it is not, they do not apply. 2. In the generative-models literature, the data distribution is mostly taken to be some unknown complex distribution and the labels follow a multinomial distribution with appropriate parameters. These bounds do not address that setting, so the use cases will be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback and criticism. Please find below our response to your concerns. We wish to emphasize that we do not make assumptions about the distribution over the labels or the data; we only assume that the per-layer conditional distribution is Gaussian followed by a ReLU. **Weakness 1:** "The proof techniques assume the data is Gaussian". There's been a misunderstanding here: in Section 4.1 we assume that the *conditional distribution* of $y \mid x$ is Gaussian, and in Section 4.2 we assume that it is Gaussian followed by a ReLU. This assumption is not just a theoretical convenience. The additive Gaussian noise before the ReLU is used in state-of-the-art models like StyleGAN, where the learned Gaussian noises are called ``style'' variables: these give desirable stochasticity to the images, such as texture in hair, skin, etc. As we make no assumptions about the distribution of $x$, the resulting distribution of $y$ will be $p_Y(y) = \int_{x} q_X(x) p(y|x) dx$, and this distribution can be quite complicated depending on the distribution of $x$. Furthermore, we can compose our theorem for one-layer networks multiple times. This allows us to give sample complexities for multi-layer ReLU networks. As this process of linear transformation followed by a non-linear activation is repeated $L$ times, the final distribution will be far more expressive than a simple Gaussian. For example, the distribution of images produced by StyleGAN has such a form. References: [1] Karras, Tero, Samuli Laine, and Timo Aila. "A style-based generator architecture for generative adversarial networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019. **Weakness 2:** "There is a dependence on the square of the dimensionality of data... the bounds are loose". Unfortunately, learning a high-dimensional Gaussian with unknown covariance matrix in total variation takes $\tilde{\Omega}(d^2)$ samples; see [2].
Our Theorem 4.5 would solve their lower bound instance, so the same lower bound applies and our bound is not loose beyond possibly log factors. References: [2] Ashtiani, Hassan, et al. "Nearly tight sample complexity bounds for learning mixtures of gaussians via sample compression schemes." Advances in Neural Information Processing Systems 31 (2018). **Question 1:** "If $x$ has a multinomial distribution, does this improve the bound?" Perhaps assuming multinomial distributions on $x$ can help simplify the analysis and remove the condition number dependence, but it's not going to remove the $d^2$ dependence (which appears even if $x = 0$ always). The reason we assume an unknown arbitrary distribution over $x$ is that it allows us to compose our guarantee over multiple layers. For example, if the generative model maps a Word2Vec embedding to an image, then at the first layer, the distribution of $x$ would be that of a Word2Vec embedding. Then for the second layer, we can treat $x$ as the output of the first layer. This allows us to compose our guarantee over multiple layers, and hence allows for an expressive distribution over the data. **Question 2:** "What would happen if the distribution of $y$ is unknown but $x$ has a multinomial distribution?" This would be a significantly harder case, as this would imply that we have no way of writing the likelihood of $y$. The assumption that the conditional distribution is a Gaussian allows us to write the log-likelihood of $y$ in terms of $x$, regardless of the distribution of $x$. It's an interesting question what one could do with fewer assumptions on $y|x$, but it would need a pretty different approach. **Question 3:** "Line 36 says $x$ denotes the labels and $y$ is the data generated using that label ...
does Theorem 4.1 use the same notation or is it reversed?" We apologize for the confusion -- in line 36, we used "labels" in the sense of the class-conditional labels that would be passed to a text-conditional generative model. In the context of linear regression, $x$ would be feature vectors (such as Word2Vec embeddings) and $y$ would be the data generated. We will change this to reduce confusion. --- Rebuttal Comment 1.1: Title: Response Comment: I have read the authors' rebuttal. Regarding the misunderstanding in Weakness 1: by the distribution of the data, I actually meant the conditional distribution of the data given the labels. Sorry for the typo, and thanks to the authors for the correction. The specific StyleGAN example they referred to is correct. The additive learned noise is Gaussian, but in the style component there is another added term, the latent variable (the W space concatenated with another learned matrix A), which follows some unknown distribution. So I am not sure how the resulting style variable in each layer satisfies the Gaussianity assumption. Anyway, the other answers and comments are satisfactory. I will maintain my rating. Thanks. --- Reply to Comment 1.1.1: Comment: Thank you for following up! Regarding the question about StyleGAN: indeed, as the reviewer has suggested, the style component has an additive term of the W-space variable (which has an unknown distribution) multiplied by a matrix $A$. Note that this linear transformation by $A$ followed by additive Gaussian noise implies that, if we condition on the W-space variable, the output of each generator layer is Gaussian before applying the non-linear activation. In our work, the *variable $x$ has an unknown distribution*, and $y|x$ is Gaussian. Hence, in our notation, we can take $x$ to be the W-space variable (which has an unknown distribution), and the matrix $W$ in our work corresponds to the matrix $A$ in the StyleGAN architecture.
We hope this resolves your confusion, thank you for your response!
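[Editor's note] The generative process discussed in this thread (an arbitrary, unknown distribution on $x$; conditioned on $x$, a Gaussian pre-activation followed by a ReLU; composed over layers) can be illustrated with a minimal sampling sketch. All dimensions, weights, and the noise level below are hypothetical, chosen for illustration and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, W, sigma, rng):
    """One generator layer: conditioned on the input x, the pre-activation
    W @ x + noise is Gaussian; a ReLU non-linearity is then applied."""
    pre = W @ x + sigma * rng.standard_normal(W.shape[0])
    return np.maximum(pre, 0.0)

def sample_y(x, weights, sigma, rng):
    """Compose L layers. The marginal of y integrates over the (arbitrary,
    unknown) distribution of x, so it can be far more expressive than a
    single Gaussian -- the point made in the rebuttal."""
    h = x
    for W in weights:
        h = relu_layer(h, W, sigma, rng)
    return h

# hypothetical dimensions: x in R^4, two layers mapping 4 -> 8 -> 6
weights = [rng.standard_normal((8, 4)), rng.standard_normal((6, 8))]
x = rng.standard_normal(4)  # x may follow any distribution (e.g. an embedding)
y = sample_y(x, weights, sigma=0.1, rng=rng)
```

In the StyleGAN analogy from the reply above, `x` plays the role of the W-space variable and each `W` the role of the learned matrix $A$.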
null
null
null
null
null
null
A Single-Loop Accelerated Extra-Gradient Difference Algorithm with Improved Complexity Bounds for Constrained Minimax Optimization
Accept (oral)
Summary: The authors propose a method for solving nonconvex-nonconcave saddle-point problems with convergence rate $O(\epsilon^{-2})$, using gradient-difference prediction and momentum acceleration to improve the extragradient descent-ascent method. The proposed method is state-of-the-art in theory and leading in practice, including on the task of neural network training under adversarial attacks. Strengths: A quite elegant construction of the algorithm, which also yields the best-known convergence rate guarantees. The algorithm allows practitioners to efficiently address the most practically important setting of nonconvex-nonconcave problems using an easy-to-implement method which, judging from the empirical study, will surely replace analogous methods. Weaknesses: No significant weaknesses Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The authors could provide a more extensive empirical study: the algorithm seems to be a candidate for wide adoption in practice, and it would be good to have more justification of its efficiency. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Everything is okay Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: Authors could provide a more extensive empirical study, because the algorithm seems to be a candidate for being widely accepted in practice and it would be good to have more justification of its efficiency. **A**: Thanks for your positive and valuable comments. To address your concerns, we have provided a more extensive empirical study and reported more experimental results, as shown in Figs. 3 and 4 in the PDF file (please see the “global” response). That is, we have applied the proposed algorithm to some real-world applications, such as robust neural network training in Figure 3 and Wasserstein GAN training in Figure 4. All the results show that the proposed algorithm performs much better than other algorithms such as GDA, MGDA and Smoothed-GDA, which also verifies our theoretical results. All the results will be included in our final paper. The details are as follows: 1. More Results for Robust Neural Network in Figure 3 in the PDF file (please see the “global” response): Under adversarial attacks including $\ell_\infty$-norm FGSM and PGD attacks, the test accuracies of all the algorithms including GDA, MGDA, Smoothed-GDA and our EGDA are reported in Fig. 3, where the $\ell_\infty$-norm perturbation level $\varepsilon$ varies from $0.0$ to $0.4$. Note that for EGDA, the parameter $\tau$ is set to $3/4$, and the parameters $\alpha$ and $\beta$ in Smoothed-GDA are set to $0.2$ and $0.8$ as in [51], respectively. The number of iterations is set to 100 for all the algorithms. All the results show that Smoothed-GDA and EGDA significantly outperform GDA and MGDA in terms of accuracy, and our EGDA also performs better than the other algorithms, including Smoothed-GDA. 2. More Results for Wasserstein GAN in Figure 4 in the PDF file (please see the “global” response): Finally, we apply the stochastic version of the proposed EGDA algorithm to train a Wasserstein GAN [R3] on the MNIST dataset, and verify the effectiveness of our algorithm.
Here the architectures of the Wasserstein GAN (both its discriminator and generator) are multi-layer perceptrons (MLPs). The layer widths of the MLP in the generator are 100, 128, 784, and the layer widths of the MLP in the discriminator are 784, 128, 1. In addition, the batch size is set to 64, and the learning rate is 1e-4. Moreover, we compare our algorithm against one state-of-the-art method, Stochastic Gradient Descent Ascent (SGDA), by showing the images they generate after 20k and 100k iterations, as shown in Fig. 4. All the results show that our stochastic algorithm performs much better than SGDA and produces higher-quality images, which shows the effectiveness of our algorithm. [R3] M. Arjovsky, et al., “Wasserstein generative adversarial networks,” ICML 2017. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your work on the final version of your paper! The rebuttal has clarified my questions. I decided to keep my overall rating the same. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's positive and valuable comments. We are delighted to learn that our response effectively addressed your questions.
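[Editor's note] The $\ell_\infty$-norm FGSM attack referenced in these robust-training experiments admits a one-line description: perturb the input by $\varepsilon$ times the sign of the gradient of the loss with respect to the input. A minimal sketch on a hypothetical logistic model (the weights, input, and $\varepsilon$ below are made up for illustration and are not from the paper):

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """FGSM: move the input by eps in the sign direction of the loss
    gradient -- the worst-case first-order step under an l_inf budget."""
    return x + eps * np.sign(grad_x)

def logistic_loss(w, x, y):
    # loss for label y in {-1, +1}: log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * (w @ x)))

# hypothetical model and input
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0

# gradient of the logistic loss w.r.t. the input x (closed form here)
grad_x = -y * w / (1.0 + np.exp(y * (w @ x)))

x_adv = fgsm(x, grad_x, eps=0.1)  # stays within the l_inf ball of radius 0.1
```

The PGD attack also mentioned in the rebuttal iterates this step several times, projecting back onto the $\varepsilon$-ball after each update.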
Summary: The authors have designed a single-loop accelerated algorithm for constrained min-max optimization problems of the form $\min_{x\in X}\max_{y\in Y} f(x,y)$. The algorithm provably converges to an approximate local stationary point in three particular settings: 1. non-convex non-concave min-max optimization, where stationarity is measured for the function $f(x,y)$; 2. convex non-concave min-max optimization; and 3. non-convex concave min-max optimization, where in the latter two settings stationarity is measured for the function $\phi(x)=\max_{y'\in Y} f(x,y')$. The authors showed that their algorithm computes an $\epsilon$-stationary point in $O(1/\epsilon^2)$ iterations. Finally, they experimentally verify their proposed algorithm. The authors' rebuttal addressed my concerns, and their additional empirical evidence complemented their already compelling results. For these reasons, I decided to increase my score. Strengths: The design and analysis of algorithms for non-convex non-concave minimax optimization is a fundamental problem, and the convergence results are indeed compelling. Moreover, the authors obtain state-of-the-art results for the convex non-concave and non-convex concave settings for the merit function they consider. Furthermore, the main paper is well-written and easy to follow, and the algorithm combines several interesting ideas. I verified the proofs of Propositions 1 and 2, and the rest of the statements seem reasonable. I found the proofs of Propositions 1 and 2 to be a bit dense, which made verifying them somewhat taxing. Weaknesses: Overall, I did not find any important weakness in the paper. As a suggestion, improving the readability and verifiability of the proofs could greatly benefit readers. Lastly, I came across a couple of typos: - In line 242, I think "conference" was meant to be "convergence". - When considering the proof of Proposition 2 in the appendix, is $\widehat{u}_{t+1/2}$ identical to the one referred to in line 706?
If so, clarifying this might prevent confusion. - In line 3 of Algorithm 1, do you also need to initialize $y_{-1}$ for the first iteration of the algorithm? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The recent work in [1] shows that the computation of an approximate stationary point is PPAD-complete when the action space of the two players is jointly constrained. Notably, the findings in your paper seem to suggest a different outlook when the strategy space of the two players is a product space, which sidesteps the hardness results. A comment from the authors addressing this observation would be greatly appreciated. Furthermore, the proofs of the propositions and lemmas presented appear to be quite opaque, and verifying them poses a bit of a challenge. It would be truly beneficial if the authors could share some additional insights into the process behind the design and analysis of the algorithm. I think it would be good if the authors could provide better commentary on the derivations of the proofs. [1] "The Complexity of Constrained Min-Max Optimization" by Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Any assumptions needed for the theorems to hold are listed. I do not believe this work can have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: The recent work in [1] shows that the computation of an approximate stationary point is PPAD-complete when the action space of the two players is jointly constrained. Notably, the findings in your paper seem to suggest a different outlook when the strategy space of the two players is a product space, which sidesteps the hardness results. A comment from the authors addressing this observation would be greatly appreciated. [1] "The Complexity of Constrained Min-Max Optimization" by Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. **A**: Thanks for your positive and constructive comments. To address your concern, we will add this reference and provide some discussion of [1] in our final paper. 1. In [1], the authors studied a constrained nonconvex-nonconcave problem, $\min_x\max_y f(x,y) \text{ s.t. } g(x,y)\le 0$, where the function $g$ is linear, so that the constraint set is a polytope. In contrast, the constraint sets in our paper are closed, convex and compact sets in the domains $X$ and $Y$. That is, the constraint set in our paper is only a special case of [1]. Therefore, the problem considered in our work is much simpler than that in [1]. 2. We design an extra-gradient difference iteration in our algorithm, similar in form to [43] (i.e., a difference of gradients), to achieve an approximation of the negative curvature of a Hessian matrix. That is, it goes beyond a first-order method, while a first-order method is studied in [1]. The negative curvature method [43] can escape from saddle points in non-convex optimization problems. Thus, we think that our method may escape from saddle points of the lower-level nonconcave problem w.r.t. $y$, which is very hard for first-order methods. That is, our method can find a "better" solution of the lower-level nonconcave problem w.r.t. $y$, which is an important reason the hardness is reduced. All these discussions will be included in the revised manuscript.
**Q**: Furthermore, the proofs of the propositions and lemmas presented appear to be quite opaque, and verifying them poses a bit of a challenge. It would be truly beneficial if the authors could share some additional insights into the process behind the design and analysis of the algorithm. I think it would be good if the authors could provide better commentary on the derivations of the proofs. **A**: To address your concern, we will provide more details about the derivations of the proofs, and add more detailed explanations of the process behind the design and analysis of the proposed algorithm in our final paper. **Q**: In line 242, I think "conference" was meant to be "convergence". **A**: To address your concern, we have corrected this in the revised manuscript. **Q**: When considering the proof of Proposition 2 in the appendix, is $\widehat{u}_{t+1/2}$ identical to the one referred to in line 706? If so, clarifying this might prevent confusion. **A**: Yes, $\widehat{u}_{t+1/2}$ is identical to the one referred to in line 706, and we have clarified this in the revised manuscript. **Q**: In line 3 of Algorithm 1, do you also need to initialize $y_{-1}$ for the first iteration of the algorithm? **A**: Yes, we need to initialize $y_{-1}$, and we have added such initialization in the revised manuscript. --- Rebuttal Comment 1.1: Comment: I appreciate your response to my questions. Moreover, the empirical evidence presented is quite compelling. After reviewing the feedback and rebuttals from the other reviewers, I've decided to increase my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer for noticing our concrete contribution and for raising the score. We are delighted to learn that our response effectively addressed your questions.
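[Editor's note] As background to the extra-gradient discussion in this thread (this is not the paper's EGDA, which adds a gradient-difference prediction step and momentum acceleration), a minimal sketch contrasting plain simultaneous gradient descent-ascent with the classical extragradient step of Korpelevich on the toy bilinear problem $f(x,y) = xy$, whose unique saddle point is the origin. The step size and iterate count are arbitrary illustration choices.

```python
# gradients of f(x, y) = x * y: df/dx = y, df/dy = x

def gda_step(x, y, eta):
    """Simultaneous gradient descent-ascent: known to spiral away
    from the saddle point on bilinear problems."""
    return x - eta * y, y + eta * x

def eg_step(x, y, eta):
    """Extragradient: take a look-ahead (extrapolation) step, then
    update using the gradient evaluated at the look-ahead point."""
    x_half, y_half = x - eta * y, y + eta * x
    return x - eta * y_half, y + eta * x_half

eta = 0.1
gda = eg = (1.0, 1.0)
for _ in range(200):
    gda = gda_step(*gda, eta)
    eg = eg_step(*eg, eta)

# the extragradient iterate contracts toward the saddle point (0, 0),
# while the plain GDA iterate spirals outward and diverges
```

This is why methods in the extragradient family are the natural starting point for the constrained min-max settings debated above; the look-ahead gradient evaluation is the ingredient the paper's algorithm builds on.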
Summary: This work proposes a single-loop extra-gradient difference acceleration algorithm to find an $\epsilon$-stationary point in constrained minimax optimization, which pushes the best complexity bounds for NC-NC, C-NC, and NC-C problems forward to $\mathcal{O}(\epsilon^{-2})$. The proposed approach can handle more general problems, as it does not require monotonicity or structural assumptions. Moreover, for the NC-C problem, the authors prove that the proposed method has a better complexity bound under the stationarity of $\phi$. Experiments are conducted to validate the method empirically. The results show that it achieves a better convergence rate compared with related methods. Strengths: 1. The theoretical contributions are significant. The method employs a novel prediction point scheme to obtain the quasi-cocoercivity property, which relaxes the assumption requirements. Additionally, the paper provides a thorough analysis of the convergence complexity bound, demonstrating its superiority over the current state-of-the-art approaches. 2. The paper is well organized and easy to follow. The logical flow of ideas is well-structured, enhancing the overall readability and comprehension of the presented contents. 3. Empirical studies are conducted to validate the method on both synthetic and real tasks. Weaknesses: There are some possible limitations where the paper could be further improved. 1. I suggest the authors undertake additional analysis of the algorithm's time complexity, both theoretically and empirically. This deeper exploration would provide valuable insights, particularly for potential industrial applications. 2. The absence of experiments on the C-NC problem should be explained within the paper. 3. There is some empirical evidence that seems to be inconsistent with the theoretical results; for example, FEG has a fast theoretical rate but is less effective in practice, and GDA may not converge to stationary points.
The authors may explain more about these in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: no negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: I suggest the authors undertake additional analysis of the algorithm's time complexity, both theoretically and empirically. This deeper exploration would provide valuable insights, particularly for potential industrial applications. **A**: Thanks for your positive and valuable comments. To address your concern, we will add the time complexity of the proposed algorithm for solving min-max problems in the final version. In our algorithm, three gradients need to be calculated per iteration update, so the time complexity for the constrained NC-NC setting is $O((m+n)\epsilon^{-2})$, where $m$ and $n$ denote the dimensions of $x$ and $y$, respectively. Moreover, we have conducted more empirical experiments and compared the performance of all the algorithms over running time, as shown in Fig. 1 in the PDF file (please see the “global” response). All the results show that the proposed algorithm converges significantly faster than the other algorithms, as verified by both theoretical and empirical analysis. All the results will be included in our final paper. **Q**: The absence of experiments on the C-NC problem should be explained within the paper. **A**: To address your concern, we will add some discussion of the C-NC problem in the revised manuscript. In fact, C-NC minimax problems are rare in real-world applications; one example is the convex-nonconcave zero-sum game in [R1]. If necessary, we will add some experiments on such a problem in the revised manuscript. [R1] G. Su, et al. Secrecy-oriented user association in ultra dense heterogeneous networks against strategically colluding adversaries. IET Commun., 2022. **Q**: There is some empirical evidence that seems to be inconsistent with the theoretical results; for example, FEG has a fast theoretical rate but is less effective in practice, and GDA may not converge to stationary points. The authors may explain more about these in the paper.
**A**: To address your concern, we make the following clarifications. In fact, FEG has a fast theoretical rate for problems with an additional structural assumption. However, the NC-NC function used in Figure 1 in this paper does not satisfy that structural assumption. As a result, FEG has a slow convergence rate there. In addition, we have conducted more experiments with FEG on the function in Figure 2 (please see the “global” response), and reported the empirical results in Fig. 2 in the PDF file. The results show that FEG converges much faster than GDA and EAG. --- Rebuttal Comment 1.1: Comment: Thanks for your positive and valuable comments. We will improve the final version based on your comments.
Summary: This paper discusses a new extra-gradient difference acceleration algorithm for solving constrained nonconvex-nonconcave minimax problems. The algorithm introduces a "quasi-cocoercivity property" and momentum acceleration to significantly improve the convergence rate in the constrained NC-NC setting. The algorithm attains a complexity of $O(\epsilon^{-2})$ for finding an $\epsilon$-stationary point of the function $f$, which outperforms the best-known complexity bounds. The paper also provides theoretical analysis and comparisons with existing algorithms. Strengths: As a person who works in minimax optimization, I can make a fair judgment of this work. This paper presents a novel extra-gradient difference acceleration algorithm for solving constrained nonconvex-nonconcave minimax problems, which improves the existing convergence rate and outperforms the best-known complexity bounds to $O(\epsilon^{-2})$. The paper also provides a comprehensive comparison with existing algorithms and a theoretical analysis of the algorithm's performance. I understand the "extra-gradient difference prediction" step as the key to the success of convergence rate improvements. In addition, I went through the proofs of Theorems 1 and 2 in detail. Overall, this paper provides valuable contributions to the field of minimax optimization and presents a promising algorithm for solving constrained NC-NC problems. Weaknesses: The paper assumes that the objective function satisfies certain structural assumptions, which may limit its practical applications. Also the writing style might not be as friendly for readers unfamiliar with the topic (I found it sufficiently clear though). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: ---Can you explain more about the "quasi-cocoercivity" property and discuss how it improves the convergence rate in the constrained NC-NC setting? Is this an absolutely necessary property for the improved convergence rate to hold? 
---Can your algorithm or its variants be applied to other minimax optimization problems beyond the constrained NC-NC setting? ---There do exist some typos. For example, in Eq. (4) in Line 215, a factor of 2 is missing in the denominator. These should be minor, but the authors should check further and correct them. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: This paper is purely theoretical and does not admit negative social impacts to my best knowledge. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: Can you explain more about the "quasi-cocoercivity" property and discuss how it improves the convergence rate in the constrained NC-NC setting? Is this an absolutely necessary property for the improved convergence rate to hold? **A**: Thanks for your positive and valuable comments. To address your concern, we will add some explanation of the "quasi-cocoercivity" property and discuss how it improves the convergence rate in the constrained NC-NC setting in our final paper. From the perspective of theoretical analysis, we use the "quasi-cocoercivity" property to offset some residual terms produced by Propositions 1 and 2. As a result, we can obtain a descent sequence of the potential function, i.e., $G_t$. From the perspective of algorithmic intuition, the "quasi-cocoercivity" property is related to the extra-gradient difference iteration, which can improve the convergence rate. Thus, we think that it is a necessary property for improving the convergence rate. **Q**: Can your algorithm or its variants be applied to other minimax optimization problems beyond the constrained NC-NC setting? **A**: To address your concern, we will add some discussion in the revised manuscript. The proposed algorithm can be extended to some problems beyond the constrained NC-NC setting. For example, the robust neural network training problem does not require compactness of the domain $X$ (where $x$ is the parameter of the neural network), so it goes beyond the constrained condition on the domain $X$. Furthermore, we can also extend the proposed algorithm to the stochastic setting to effectively solve large-scale problems. In addition, we also provide more experimental results in Figs. 3 and 4 in the PDF file for more real-world applications (please see the “global” response). **Q**: There do exist some typos. For example, in Eq. (4) in Line 215, a factor of 2 is missing in the denominator. These should be minor, but the authors should check further and correct them.
**A**: To address your concern, we have corrected these typos in the revised manuscript. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the clarifications in their rebuttal, and I have raised my score from 7 to 8. I do encourage the authors to take more passes on their manuscript for typographical polishing/corrections before their final camera-ready submission. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer for raising the score. We will carefully correct these typos in our manuscript before the final camera-ready submission.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chairs: Thank you very much for the constructive comments. We provide more experimental results in the PDF file; the details are as follows: **1**. We have conducted more empirical experiments and compared the performance of all the algorithms over running time, as shown in Fig. 1 in the PDF file. **2**. We have conducted more experiments with FEG on the function in Figure 2, and reported the empirical results in Fig. 2 in the PDF file. **3**. More Results for Robust Neural Network in Figure 3: Under adversarial attacks including $\ell_\infty$-norm FGSM and PGD attacks, the test accuracies of all the algorithms including GDA, MGDA, Smoothed-GDA and our EGDA are reported in Fig. 3, where the $\ell_\infty$-norm perturbation level $\varepsilon$ varies from $0.0$ to $0.4$. Note that for EGDA, the parameter $\tau$ is set to $3/4$, and the parameters $\alpha$ and $\beta$ in Smoothed-GDA are set to $0.2$ and $0.8$ as in [51], respectively. The number of iterations is set to 100 for all the algorithms. All the results show that Smoothed-GDA and EGDA significantly outperform GDA and MGDA in terms of accuracy, and our EGDA also performs better than the other algorithms, including Smoothed-GDA. **4**. More Results for Wasserstein GAN in Figure 4: Finally, we apply the stochastic version of the proposed EGDA algorithm to train a Wasserstein GAN [R3] on the MNIST dataset, and verify the effectiveness of our algorithm. Here the architectures of the Wasserstein GAN (both its discriminator and generator) are multi-layer perceptrons (MLPs). The layer widths of the MLP in the generator are 100, 128, 784, and the layer widths of the MLP in the discriminator are 784, 128, 1. In addition, the batch size is set to 64, and the learning rate is 1e-4. Moreover, we compare our algorithm against one state-of-the-art method, Stochastic Gradient Descent Ascent (SGDA), by showing the images they generate after 20k and 100k iterations, as shown in Fig. 4.
All the results show that our stochastic algorithm performs much better than SGDA and produces higher-quality images, which demonstrates the effectiveness of our algorithm. [R3] M. Arjovsky et al., “Wasserstein generative adversarial networks,” ICML 2017. Pdf: /pdf/25893be4f88b45d9604a21a9a59bcf471ceabc05.pdf
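As an aside for readers, the MLP shapes quoted in the rebuttal above (generator widths 100, 128, 784; discriminator widths 784, 128, 1; batch size 64) can be sanity-checked with a minimal forward-pass sketch. This is our own illustration with invented helper names, not the authors' training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """He-style random initialization for a plain MLP with the quoted widths."""
    return [(rng.normal(scale=np.sqrt(2.0 / a), size=(a, b)), np.zeros(b))
            for a, b in zip(widths[:-1], widths[1:])]

def forward(params, x):
    """Forward pass with ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

G = mlp([100, 128, 784])         # generator: 100-d noise -> 28x28 = 784 pixels
D = mlp([784, 128, 1])           # discriminator/critic: image -> scalar score
z = rng.normal(size=(64, 100))   # batch size 64, as stated in the rebuttal
fake = forward(G, z)
score = forward(D, fake)
```

Shapes flow as 64x100 -> 64x784 through the generator and 64x784 -> 64x1 through the discriminator, matching the layer widths reported above.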
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Online Corrupted User Detection and Regret Minimization
Accept (poster)
Summary: This paper presents an important online learning problem named LOCUD to learn and utilize unknown user relations from disrupted behaviors to speed up learning and identify the corrupted users in an online setting. Also, the authors propose a novel bandit algorithm RCLUB-WCU, and devise a novel online detection algorithm OCCUD based on RCLUB-WCU’s inferred user relations. Extensive experiments demonstrate that the proposed methods can achieve superior performance over previous bandit algorithms and high corrupted user detection accuracy. Strengths: 1. The paper is scientifically sound. 2. The presentation is clear and easy to follow. 3. Extensive experiments show that the proposed methods achieve superior performance over other baselines. Weaknesses: 1. The introduction section of the paper lacks sufficient emphasis on the motivation behind the proposed methods. The authors should provide a more comprehensive analysis of the current issues and challenges in the relevant fields, clearly indicating how their research work addresses and improves upon these challenges. This will help readers better understand the significance and contributions of the proposed methodology. 2. The abstract section should be more concise. It should effectively highlight the key innovations and improvements introduced by the proposed methodology. 3. The paper provides limited discussion and summary of the relevant literature. To strengthen the research methodology, the authors should include a more extensive review of existing research methods, along with an analysis of their strengths and weaknesses. 4. The experimental content is not sufficient to effectively demonstrate the superiority of the method. It is suggested that the authors add experiments along more dimensions and give a comprehensive analysis and explanation of the experimental results, thus making the conclusions in the paper more convincing. 5. 
The dataset used in the experiments is relatively small, which may limit the generalizability of the findings. It is recommended to supplement a large-scale real dataset for performance validation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Provided above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer Lisv Thanks for the positive comments and valuable suggestions for further improving our work. Our responses are listed below. ## 1. About improving the writing and contents of the introduction, abstract, and related work sections: Thanks for giving these detailed and valuable suggestions on improving the writing and contents of our paper. For the introduction section, we will follow your advice to elaborate on the motivations behind our proposed methods by providing more discussions on the current issues, challenges, and how we address these challenges. For the abstract section, we will highlight our contributions more concisely. For the related works, we will follow your suggestion to add more extensive discussions on existing relevant works with a more detailed analysis of their strengths and limitations in the Appendix. ## 2. About the experimental content: In our experiments, the number of datasets and the exploration of different settings are richer than or comparable to related works [3, 7, 17, 18, 20, 24]. In the previous works on clustering of bandits, [20] uses one synthetic dataset and two real-world datasets, and they do not conduct any studies in different settings. In [24], they employ one synthetic dataset and one real-world dataset with a parameter study only on the synthetic dataset. In [18], they use one synthetic dataset and two real-world datasets and explore the influence of cluster numbers. In the previous works on bandits robust to corruption, [3] uses one synthetic dataset and one real-world dataset, and tests both contextual and non-contextual settings only on the synthetic dataset; in [7], they use one synthetic dataset and two real-world datasets with a study of the difference caused by attacking a single context versus multiple contexts. 
In the previous works on offline corrupted user detection, [17] uses two real-world datasets with one ablation study to explore how the components in their model influence the performance. Compared to the above previous works, our work includes the results on one synthetic dataset and three real-world datasets. We also add two additional experiments on two real-world datasets to observe the algorithm's performance with different corruption levels and cluster numbers. The number of datasets and the exploration of different settings are richer than or comparable to related works. Apart from the experimental contents, we also give solid theoretical performance guarantees to prove the superiority of our proposed methods. We will add more experiments with more discussions and explanations. We would appreciate it if the reviewer could provide more specific suggestions on adding some experiments that could further improve our study. ## 3. About the size of the dataset for experiments: In the previous works on clustering of bandits and bandits with corruption, the sizes of datasets in most works are not very large and are close to ours [8, 18, 24, 32]. Therefore, following these works [8, 18, 24, 32], we extract a proportion of the large dataset to be the dataset used for experiments. We agree that performing experiments with a larger dataset would enhance the generalizability of our findings. Following your valuable advice, we have done some experiments on an enlarged dataset extracted from Yelp (the same rule of extraction, 20000 users and 20000 items, 10 times larger than the dataset used in our paper). The results are shown in Figure 4 and Table 3 in the global PDF. We can see our proposed algorithms also outperform baselines on this larger dataset. We will conduct more experiments on larger datasets in a later version following your valuable advice. 
Finally, we thank the reviewer again for the positive feedback, the efforts in reviewing our paper, and giving valuable advice to further improve our work. --- Rebuttal Comment 1.1: Title: Read author's rebuttal Comment: I appreciate the authors' detailed feedback on my comments. I would like to consider the rebuttal in my final comments. --- Reply to Comment 1.1.1: Title: Thanks for your review Comment: Dear Reviewer Lisv, Thank you for reading our response and the positive review. We are grateful for your time and effort in the review and your valuable suggestions. Sincerely, Authors of Paper 5846
Summary: The paper considers the following bandit setup. There are $u$ users organised into $m\ll u$ clusters. Each cluster has a vector $\theta$ attached to it. On step $t$ the learner deals with a user uniformly selected from the pool and picks an arm $a$. If this is a bona fide user, the learner gets average reward $x'_a\theta$, where $x_a$ is the feature vector for the arm and $\theta$ is the vector for the cluster the user belongs to. There are a number of corrupted users, though, who give reward $x'_a\theta + \eta + c$, where $\eta$ is noise and $c$ (a bounded quantity) is corruption. We have a twofold problem of minimising the regret and identifying the corrupted users. The paper presents an algorithm based on a graph of connections between users built by observations of their behaviour. The regret upper bound is matched by a lower bound. It is shown that with high probability we identify the corrupted users correctly and after a while end up with correct clusters. Strengths: I think this is an interesting result and a strong guarantee. The explicit modelling of corrupted user behaviour may seem restrictive at first, but it covers many possibilities. The authors were very careful to relax all the requirements as much as possible (e.g., consider sub-Gaussian noise etc). The algorithm is intuitive. Weaknesses: No obvious weaknesses. The appeal of the result may be limited for some NeurIPS participants. Some minor suggestions (not reflected in my evaluation of the paper): 1. I do not think quantifiers should be used as in At $\forall t$, for any fixed unit vector $z$ ... (Assumption 3.) Using "every" here will not blow the volume of the paper out of proportions, but will improve readability. 2. Use $\verb!\left(...\right)!$ in formulas like (7) to get larger brackets. 3. I do not find the abbreviations used in the paper convenient, and phrases such as CW-OFUL-Ind outperforms LinUCB-Ind because it considers the corruption, but worse than RCLUB-WCU easy to parse. 
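For concreteness, the reward model in this reviewer's summary (a bona fide user returns $x'_a\theta + \eta$, while a corrupted user returns $x'_a\theta + \eta + c$ with a bounded corruption term $c$) can be simulated directly. The sketch below uses hypothetical names and a fixed corruption budget purely for illustration; it is not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta = rng.normal(size=d)             # preference vector of the user's cluster
theta /= np.linalg.norm(theta)

def observed_reward(x, corrupted, c=0.5, noise=0.1):
    """x'theta + eta for bona fide users; a bounded corruption c is added otherwise."""
    eta = rng.normal(scale=noise)
    return x @ theta + eta + (c if corrupted else 0.0)

x = np.ones(d) / np.sqrt(d)            # feature vector of one arm
honest = np.mean([observed_reward(x, False) for _ in range(2000)])
corrupt = np.mean([observed_reward(x, True) for _ in range(2000)])
# on average, a corrupted user's reward exceeds an honest one's by roughly c
```

This makes the detection difficulty concrete: the gap between the two behaviors is bounded and can be small relative to the noise, so single observations do not reveal corruption.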
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer GoSQ We are very grateful for your strongly positive comments and appreciation. Below are our responses to your minor suggestions. ## 1. About the minor suggestions 1-2 on improving the format and readability: Thanks for giving these detailed suggestions. We will revise these two points following your valuable advice. ## 2. About the abbreviations: Thanks for your valuable suggestion. We will add a table to make it more convenient for the readers to find the abbreviations. Finally, thanks again for your positive feedback, time to review our paper, and detailed and valuable suggestions on improving our work.
Summary: The authors introduce an online learning problem called LOCUD (Learning and Online Corrupted Users Detection from bandit feedback) in which the aim is to detect a small fraction of the overall users with corrupt behaviors; corrupt users occasionally perform undesirable actions, but otherwise mimic normal user behavior, making them challenging to detect. The paper then proposes a framework that leverages the relations between users to form semantic clusters, and uses the clusters to identify corrupt users. Specifically, the authors propose RCLUB-WCU (Robust CLUstering of Bandits With Corrupted Users) to progressively prune a fully connected graph of users to clusters of connected components based on user interactions and preferences. Then, OCCUD (Online Cluster-based Corrupted User Detection) computes robust and non-robust estimates of each user's preferences, and identifies a user as corrupt if the gap between the two estimates exceeds a carefully designed threshold. OCCUD is repeatedly invoked within RCLUB-WCU to continually prune and refine the relational structure amongst the users. Experiments on one synthetic dataset and three real-world datasets show the proposed approach is able to detect more corrupted users while achieving the lowest regret over time among competing methods. Strengths: * The paper introduces a challenging but relevant problem of trying to identify corrupt users despite sporadic behavior in an online dynamic environment. This problem is especially relevant for sites like Amazon and Yelp which often contain users that exhibit corrupt behavior. * The proposed approach of leveraging relational information between users to more effectively detect corrupt users is intuitive, and the experimental results suggest its effectiveness. * Analyses and bounds related to regret are given for the RCLUB-WCU/OCCUD algorithm, with proofs provided in the Appendix. * The paper is generally well-written and follows a logical progression. 
Weaknesses: * Experiments are performed on a small number of small datasets, potentially limiting the generalizability of the proposed approach. Performing experiments with a wider range of larger datasets would significantly benefit the claims made in the paper. * No empirical runtime analysis is provided for the proposed approach or any of the competing methods. Runtime analysis can help practitioners decide what method is likely to work best for their particular problem. * Only one baseline method is compared to the proposed approach for the AUC results (Table 1). Where are the results for the other methods? * Additional experimental results including different corruption levels and number of clusters are provided in the Appendix, but those results (Figures 4 and 5) only show regret, and only compare the proposed approach with two baseline methods. Can the authors provide results for the other baseline methods, as well as AUC results for these additional and potentially insightful experiments? * Minor clarity improvements: * Use authors' names to cite previous work. "The work [5] proposes..." -> "Ding et al. (2022) propose...". * The legends in Figure 5 make the subplots hard to read. Figure 5 is also not color-blind friendly; consider adding markers or using different line styles for different methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Why are the dataset sizes so small (e.g., 1,400 users and 800 items for Amazon)? What are the original sizes of the datasets used in the experiments? * Where are the error bars for Table 1, and Figures 4 and 5? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No, the authors have not addressed the limitations or the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer HbaT: Thanks for the positive comments and valuable suggestions for improving our work. We will revise the paper in the final version following your advice. Our responses are as follows. ## A. Responses to the Weakness: ### 1. About the small dataset: Please refer to the response to your first question later about the dataset. ### 2. About the runtime analysis: Thanks for the valuable suggestion. We report the runtime results with $T=1,000,000$ in Table 1 in the global PDF; the experiments were run on a machine with an Intel(R) Xeon(R) Gold 6240C CPU @ 2.60GHz (36 cores, 2 threads per core). Overall, there are no large differences since all the algorithms are computationally efficient. ### 3. About the baseline for online detection: We compare OCCUD with one baseline when reporting the detection results in Table 1, because our paper is the first work to study the online corrupted user detection from bandit feedback, and no previous baselines exist. The baselines used to compare the online recommendations with RCLUB-WCU (in Figure 3) cannot detect corrupted users, so they cannot be compared with OCCUD. Also, the offline detection methods [34, 6, 17, 29] need to know all the user information in advance to derive the user embedding for classification, so they cannot be directly applied in online detection with bandit feedback. Therefore, we compare our OCCUD algorithm with a straightforward GCUD algorithm (which is proposed by us as a baseline to show that the design of OCCUD is non-trivial). Thanks for the valuable suggestion on adding more baselines. We add another straightforward baseline named GCUD2, which compares the non-robust estimators of clusters and users. We have added this GCUD2 baseline to Table 2 in the global PDF. The results also show the superior performance of our OCCUD algorithm. We will also consider adding more feasible baselines in a later version. ### 4. 
About experimental results with different corruption levels and number of clusters: Thanks very much for this valuable suggestion. In Figures 4 and 5 in the paper, we only compare with two baselines because these two baselines perform the best among all the baselines. The regret results of all the recommendation algorithms can be found in Figure 2 in the global PDF. The AUC results for online detection algorithms can be found in Figure 3 in the global PDF. These additional results also clearly show the good performance of our proposed algorithms. We will add these results in the final version following your valuable advice. ### 5. About the minor clarity improvements: Thanks for giving these detailed suggestions; we will do the revisions accordingly. ## B. Responses to the questions ### 1. About the size of the dataset: (1) In the previous works of clustering of bandits and bandits with corruption [8, 18, 24, 32], the sizes of datasets in most works are usually not very large and are close to ours. Therefore, following these works [8, 18, 24, 32], we extract a proportion of the large dataset to be the dataset used for experiments. We agree that performing experiments with a wider range of larger datasets would significantly benefit our claims. We also have some results on an enlarged dataset extracted from Yelp (the same rule of extraction, 20000 users and 20000 items, 10 times larger than the dataset used in our paper). The results are shown in Figure 4 and Table 3 in the global PDF. We can see our proposed algorithms also outperform baselines in this larger dataset. We will conduct more experiments on more and larger datasets in a later version following your valuable advice. (2) The original data sizes are: Movielens: 2,113 users and 10,197 items; Amazon: 1,429 users and 900 items; and Yelp: 1,987,929 users and 150,346 items. ### 2. About the error bars: Thanks for this valuable suggestion. 
Table 1, Figures 4 and 5 with error bars can be found in Table 2 and Figure 2 in the global PDF, respectively. We will add these error bars in the final version. Again, we thank the reviewer for the positive comments, the time spent on reviewing our paper, and the valuable advice on further improving our work. --- Rebuttal Comment 1.1: Title: Thank You Comment: I thank the authors for addressing the majority of my concerns and have increased my score accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer HbaT, We are delighted to learn that we have successfully addressed your concerns. Your insightful feedback has played a crucial role in enhancing the quality of our work, and we sincerely appreciate your dedication in reviewing our paper thoroughly. Thank you once again for your time and constructive feedback. Sincerely, Authors of Paper 5846
Summary: This research paper introduces an innovative method for learning and utilizing unknown user relations from disrupted behaviors to enhance the learning process and identify corrupted users in an online setting. To achieve this, a new bandit algorithm (RCLUB-WCU) is proposed, along with an online detection algorithm that leverages user relations inferred by RCLUB-WCU. The paper also presents a regret upper bound for RCLUB-WCU, which closely matches the lower bound with respect to T (the number of rounds) up to logarithmic factors and performs well even in degenerate cases. The experiments conducted on synthetic and real-world datasets demonstrate significant improvements in performance compared to previous bandit algorithms. Strengths: 1. The paper presents a novel application to learn unknown user relations in their preferences from potentially corrupted feedback. At the same time, the paper shows how to leverage the learned relations to speed up learning as well as adaptively detect the corrupted users online from bandit feedback. Overall I think this paper makes a significant contribution to the literature on online learning from corrupted feedback as well as detection of adversarial users in the multi-user online learning setting. 2. Experiments on synthetic and real-world datasets clearly indicate lower reward regret of the proposed approach in comparison to five other baselines from the past literature on online clustering of bandits. At the same time, the algorithm is able to identify corrupted users with higher accuracy than a simple baseline that directly compares the robust estimators of preference vectors of a user and its corresponding cluster. 3. The authors back up their results with theoretical analysis and guarantees on the performance of the proposed algorithm. Weaknesses: 1. 
It seems to me that the performance of the proposed approach could potentially be sensitive to the nature of underlying user relations in their preferences and the tightness of the detected clusters. For example, to my understanding, this algorithm might not work well if there are too many tight clusters. However, if one chooses too big clusters, then the solution might compromise on personalization as users in a loose cluster might not be represented accurately by a common preference vector defined for the cluster. The paper does not provide any discussion on the impact of cluster sizes on the algorithm. 2. The approach requires multiple parameters to be specified. (e.g. regularization parameter, confidence radius parameter, threshold parameter, edge detection parameter). There is no discussion provided on the sensitivity of the results to the choices of these parameters (in the main draft at least). 3. The results in Table 1 on detection of corrupted users are not very clear. To begin with, it is not clear what the numbers in the table indicate. I assume they indicate recall of true corrupted users. If so, where are the precision numbers? Where are the F1 scores, given that there is class imbalance? Authors need to clearly indicate what they are measuring and also provide an explanation in case they aren't measuring both precision and recall. Also it would be nice to have comparison with multiple baselines. Further, authors could potentially enrich the results with other variants of the baseline, e.g. another baseline variant could simply compare the non-robust estimators of cluster and user. (I believe the current baseline compares the robust estimators of a user and its cluster). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Page 9, lines 341-345: Authors claim that the performance of the proposed algorithm improves with the proportion of corrupted users in the dataset. 
Seems like this argument is made based on the three datapoints (from three real-world datasets). This argument could be potentially strengthened by evaluating all algorithms while varying the number of corrupted users in one of the datasets. 2. Page 6: Line 220: It says that the threshold is carefully designed to handle the estimation uncertainty... How difficult/easy is it to tune this threshold? How critical is this threshold to the performance of the approach? 3. Pages 4-5, Lines 188-204: It seems like the subscript `t` is used to denote both timestamp (bandit round) as well as the cluster? Also I see subscript `s` to indicate the bandit iteration which should ideally be denoted by t? 4. In Equation (2), are you considering data from all t -1 previous rounds? Or are you using an iterative update that only includes the data from the previous round, i.e. the (t -1)^{th} round? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors should cover any other limitations of this work in addition to potential limitations I highlighted above: Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Responses to Reviewer wNgt: We appreciate the reviewer for the positive comments and valuable advice. We will incorporate the suggestions into the final version. ## A. Responses to the Weakness: ### 1. Impact of the cluster number and sizes: Thanks for your suggestion of adding discussions on the impact of the cluster number and sizes. As you noted, the performance of our algorithm is influenced by the cluster number $m$. As shown in Theorem 3, the regret upper bound of our algorithm increases with a larger $m$. This performance decrease with larger $m$ is inevitable for all clustering of bandits (CB) algorithms. The regret lower bound in Theorem 4 also supports this claim, theoretically indicating that no CB algorithm can avoid a performance decrease with a larger $m$. Our empirical evaluations on different problem instances with different $m$ also validate this statement: (i) as detailed in Figure 5 and also mentioned in [18], the performance decrease with larger $m$ is observed for all CB algorithms. (ii) With different $m$, our algorithm outperforms the baselines (as detailed in Appendix G.2). For the second comment on the concern of ``big loose'' clusters, our algorithm automatically learns the cluster sizes over time. Our algorithm clusters users adaptively during the learning process, and it will cluster all users correctly after some interactions (theoretically supported by Lemma 1), where users in each ground-truth tight cluster share the same preference vector. Therefore, the sizes of the clusters are not chosen by our algorithm but determined by the underlying problem instance. As shown in Theorem 3, the regret upper bound depends on the cluster number $m$ and does not depend on the sizes of the clusters. We will provide some discussions in the final version. ### 2. 
About the parameters: Following the publicly available code of the previous works on the clustering of bandits and bandits with corruption [12, 20], we set the regularization parameter $\lambda=1$, the confidence radius parameter $\beta=1.5$, the threshold parameter $\alpha=0.2$, and the edge deletion parameter $\alpha_1=1$. These are classic parameters in the previous clustering of bandits and linear bandits with corruption works [8, 12, 18, 20, 24]. To ensure robustness, our algorithm does not introduce additional parameters compared to previous approaches [8,12,18,20,24]. We use the same values for these parameters as in previous works to make fair comparisons with the baselines. We will specify these parameters in the experiment section and add more experiments to show the sensitivity of the parameters. ### 3. (a) About Table 1: Following the previous works on offline corrupted user detection [6, 17, 29, 34], we use AUC as the metric for online detection. We mentioned it in lines 349-351, and we will make it clearer by giving more descriptions and specifying AUC in the caption of Table 1. ### 3. (b) About more baselines: We compare OCCUD with one baseline when reporting the detection results in Table 1, because our paper is the first work to study the online corrupted user detection from bandit feedback, and no previous baselines exist. Thanks for the valuable suggestion on adding more baselines. We have done some experiments on the proposed baseline by simply comparing the non-robust estimators of cluster and user (GCUD2). The results (AUC) are shown in Table 2 in the global PDF. The results show the superior performance of our OCCUD algorithm. We will consider adding more feasible baselines in a future version. Besides, please note that when we report the recommendation results (where there are more baselines applicable) in Figure 3, we compare our approach with several baselines. ## B. Answers to the questions: ### 1. 
About the relation between algorithm performance and proportion of corrupted users: Thanks for your valuable comment. We have done some experiments on the Movielens dataset with different proportions of corrupted users (10\%, 20\%, and 50\%). We compare our algorithm with CLUB and SCLUB (they perform the best among baselines on Movielens). The results are shown in Figure 1 in the global PDF. We can see that when there are more corrupted users, RCLUB-WCU's regret increases less than CLUB and SCLUB and gains a larger advantage. This is consistent with our argument in Line 341-345 of the paper. We will add these experimental results in the final version to strengthen the argument. ### 2. About the threshold: As mentioned in the above response, in this threshold, we do not introduce additional parameters to be tuned than previous works on the clustering of bandits and bandits with corruption [8, 12, 18, 20, 24], which ensures that our method is robust. And in empirical evaluations, we do not tune the threshold parameter $\alpha$ and the edge deletion parameter $\alpha_1$; we set them to be the same as the previous works to make fair comparisons [12, 20]. ### 3. About the subscripts t and s: Yes, $t$ is used to denote both timestamps and clusters; we use $V_t$ to denote the cluster for user $i_t$ inferred by the algorithm at round $t$. We will change $s$ to $t$ and make the notations clearer following your comments. ### 4. About the iterative update: Eq.(2) shows that the estimated cluster vector is the solution of a weighted ridge regression considering data from all t -1 previous rounds. But in the algorithm, we can use the iterative update at each round that only includes the (t-1)-th data (Line 8 in Algo.1) thanks to the nice closed-form solution (Lines 194-195) of Eq.(2), which makes the algorithm computationally efficient. Thanks again for the reviewer's appreciation and the suggestions on improving our work. 
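The iterative update described in response 4 above (maintaining the weighted ridge-regression statistics round by round thanks to the closed-form solution, instead of re-solving over all past data) can be sketched generically as follows. This is an independent illustration with our own variable names and an assumed regularizer, not the paper's implementation:

```python
import numpy as np

def ridge_update(M, b, x, r, w=1.0):
    """One round of the weighted ridge statistics: M += w x x^T, b += w r x.
    The weighted ridge-regression estimate is theta = M^{-1} b."""
    M = M + w * np.outer(x, x)
    b = b + w * r * x
    return M, b, np.linalg.solve(M, b)

# The iterative updates match the batch closed-form solution over all rounds:
rng = np.random.default_rng(0)
d, lam = 4, 1.0
M, b = lam * np.eye(d), np.zeros(d)
X = rng.normal(size=(50, d))            # feature vectors x_1, ..., x_50
rs = rng.normal(size=50)                # observed rewards
ws = rng.uniform(0.5, 1.0, size=50)     # per-round weights
for x, r, w in zip(X, rs, ws):
    M, b, theta = ridge_update(M, b, x, r, w)
batch = np.linalg.solve(X.T @ (ws[:, None] * X) + lam * np.eye(d),
                        X.T @ (ws * rs))
```

The per-round cost is a rank-one update plus one linear solve, which is what makes the algorithm computationally efficient compared with refitting from scratch each round.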
--- Rebuttal Comment 1.1: Title: Read author's rebuttal Comment: I would like to thank authors for the detailed response to my original review. I have read author's rebuttal and will consider their arguments in my final rating. --- Reply to Comment 1.1.1: Title: Thanks for your review Comment: Dear Reviewer wNgt, Thanks for reading our response and your positive review. We really appreciate your time and effort in the review and your valuable suggestions. Sincerely, Authors of Paper 5846
Rebuttal 1: Rebuttal: We sincerely appreciate all the reviewers for the positive comments, the time spent on reviewing our paper, and the valuable advice for improving our work. We have done some experiments for your reference. Please refer to the global PDF. Pdf: /pdf/b574aa0a3f9edfdfa07d9de954d6caf812521e61.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Restart Sampling for Improving Generative Processes
Accept (poster)
Summary: This paper analyzes SDE- and ODE-based samplers for diffusion models. Based on the analysis, this paper introduces a new solver, Restart, for sampling from diffusion models. The effectiveness of Restart is demonstrated on various unconditional and conditional generation tasks. Strengths: - The paper is well-written. - The proposed method shows better performance than EDM at moderate NFE regions. Weaknesses: I am willing to raise the score by 1 or 2 points if the authors address my concerns satisfactorily. **Weakness 1 : Ambiguity regarding Theorems 1 and 2.** - For Theorem 1, if we set $[t_{\min},t_{\max}] = [0,T]$, the terms for contracted errors $TV(p_T^{ODE_\theta},p_T)$ and $TV(p_T^{SDE_\theta},p_T)$ vanish because $p_T^{ODE_\theta}$, $p_T^{SDE_\theta}$, and $p_T$ are all identically Gaussian distributions. Then, we end up with terms depending on $\delta$, $\epsilon_{approx}$, and $t_{\max} - t_{\min}$ only, so Theorem 1 does not provide any insight into how ODE and SDE have distinct "winning regions", as illustrated in Figure 1 (b). The proof and claim for Theorem 1 should be reformulated such that even with $[t_{\min},t_{\max}] = [0,T]$, Theorem 1 explains how ODE and SDE have winning regions. - Likewise, I think Theorem 2 also should be proven for the entire interval $[0,T]$, so we can directly compare the errors for SDEs, ODEs, and Restart. **Weakness 2 : (Possibly) weak performance on the small NFE regime.** - How does Restart perform in the small NFE regime (NFE $\leq 30$)? In Figure 3, the figure cuts off just before Restart and ODE intersect in the small NFE regime. This seems to contradict the claim that Restart combines the best of both ODE and SDE. Moreover, given the large size of SOTA diffusion models, it is crucial that diffusion samplers work well in the small NFE regime as well. - How does Restart compare to recent fast samplers such as [1], [2], [3] in the small NFE regime?
[1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, NeurIPS, 2022. [2] Fast Sampling of Diffusion Models with Exponential Integrator, ICLR, 2023. [3] Denoising MCMC for Accelerating Diffusion-Based Generative Models, ICML, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Question 1** : How does Restart perform on FFHQ-1024 image generation (score function in https://github.com/yang-song/score_sde) compared to Improved SDE or ODE (Heun in EDM)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions. **Q1: For Theorem 1, if we set $[t_{min},t_{max}]=[0,T]$, the terms for contracted errors $TV(p_T^{ODE_\theta},p_T)$ and $TV(p_T^{SDE_\theta},p_T)$ vanish because $p_T^{ODE_\theta}$, $p_T^{SDE_\theta}$, and $p_T$ are all identically Gaussian distributions. Then, we end up with terms depending on $\delta$, $\epsilon_{approx}$, and $t_{max}-t_{min}$ only, so Theorem 1 does not provide any insight into how ODE and SDE have distinct "winning regions", as illustrated in Figure 1 (b). The proof and claim for Theorem 1 should be reformulated such that even with $[t_{min},t_{max}]=[0,T]$, Theorem 1 explains how ODE and SDE have winning regions. Likewise, I think Theorem 2 also should be proven for the entire interval $[0,T]$, so we can directly compare the errors for SDEs, ODEs, and Restart.** A: Thank you for raising this question. First, we agree that if we set $[t_{min},t_{max}]=[0,T]$, Restart is the same as ODE and has no theoretical advantage. **However, we emphasize that the utility of Restart is to reduce the accumulated error over $[t_{max}, T]$.** The interesting point is how this accumulated error is corrected during "Restart iterations". Therefore, setting $t_{min}=0$ and $t_{max}=T$ is a mis-application of Restart or SDE, as the accumulated error is 0 by definition. The main goal of Theorems 1 and 2 is to study how the already accumulated error changes under different samplers, and to understand their ability to self-correct the error via stochasticity. In essence, these theorems differentiate samplers based on their performance post-error accumulation. For example, by tracking the change of accumulated error, Theorem 1 sheds light on the distinct "winning regions" of ODE and SDE: ODE samplers have smaller discretization error and hence excel in the small NFE regime.
In contrast, SDE performs better in the large NFE regime, where the discretization error is negligible and its capacity to contract accumulated errors comes to the fore. We will clarify this point in our updated draft. **Q2: How does Restart perform in the small NFE regime (NFE ≤ 30)? How does Restart compare to recent fast samplers in the small NFE regime?** A: Thanks for the suggestions. We would like to highlight that the Restart sampler is compatible with fast ODE samplers, as they can be integrated into the deterministic backward process of Restart. As suggested by the reviewer, we compare Restart with DPM-Solver [3], a commonly used fast ODE solver. In order to further accelerate Restart, we also use DPM-Solver in the main/Restart backward processes of Restart. We’ve included the FID versus NFE curves in Fig.1(a) in the rebuttal PDF in the “Summary of Updates” comment above. The results show that Restart consistently outperforms DPM-Solver with an NFE ranging from 16 to 36. This demonstrates Restart's capability to excel over ODE samplers, even in the small NFE regime. Surprisingly, when paired with DPM-Solver, Restart achieves an FID score of 2.11 on VP [1] when NFE=30, which is significantly lower than any previous numbers (even lower than the SDE sampler with an NFE $\ge 1000$ in [1]), and makes the VP model on par with more advanced models (such as EDM [2]). We will include these results in our updated draft. **Q3: How does Restart perform on FFHQ-1024 image generation (score function in https://github.com/yang-song/score_sde) compared to Improved SDE or ODE (Heun in EDM)?** A: Thanks for the suggestion. We use the pre-trained FFHQ-1024 checkpoints in the code base pointed out by the reviewer (https://github.com/yang-song/score_sde), which is based on the Score-SDE-VE model with the NCSN++ architecture.
For the stochastic baseline, we compare with the default Predictor-Corrector (PC) sampler [1] in the code base instead of Improved SDE, due to the latter's need for time-consuming tuning of the hyper-parameters ($S_{tmax}, S_{tmin}, S_{noise}, S_{churn}$) on FFHQ. For the ODE baseline, we compare with Heun as suggested by the reviewer. We set the NFE to 300 for all samplers. Since it’s prohibitively expensive to compute the FID score on this dataset, we qualitatively assess the visual quality from different samplers. We’ve included these images in Fig.2 in the rebuttal PDF in the “Summary of Updates” comment above. We observe that the Restart sampler produces notably superior image quality compared to the other baselines. The Heun sampler fails to generate clean images, and there are noticeable noise and artifacts in the images generated by the PC sampler compared to Restart. This indicates that the Restart sampler can successfully scale to 1024 resolution and better balance speed and quality in comparison to both ODE and SDE samplers. We will include this experiment in our revised version. *[1] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. ICLR 2021.* *[2] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.* *[3] Lu, Cheng, et al. "DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps." Advances in Neural Information Processing Systems 35 (2022): 5775-5787.* --- Rebuttal Comment 1.1: Title: Updated Score Comment: I have raised the score by 1 point. I am not convinced by the result on FFHQ-1024, because I know it is possible to cherry-pick good samples on this dataset. I recommend the authors provide numerical results, e.g., FID, in the revised paper.
--- Reply to Comment 1.1.1: Title: Thank you for your reply Comment: We would like to thank the reviewer for the reply and suggestion. We would like to clarify that the samples provided in our rebuttal PDF were not selectively chosen. For a comprehensive comparison, extended samples for each method can be found in the `ffhq.pdf` via the anonymous link https://anonymous.4open.science/r/restart_rebuttal-3EE1/ffhq.pdf . These samples were generated using random seeds ranging from 1 to 9. A consistent observation is that the Restart sampler delivers markedly better image quality than the other baselines throughout the batch. The Heun sampler struggles to produce clean images, while images generated by the PC sampler display noticeable noise and artifacts compared to Restart, using the Score-SDE-VE model. Given the extensive sampling time required for high-resolution data (FFHQ-1024) – approximately 38 seconds per image on a single A100 GPU – and the need for 50k generated images to achieve a reliable FID evaluation, we are not able to provide numerical results within the discussion period. We will include these results in the revised version once they are available.
Summary: ODE-based samplers plateau in performance while SDE-based samplers deliver higher sample quality. The paper attributes this difference to discretization errors and accumulated errors. Based on these, the authors propose a sampling algorithm called Restart which alternates between the forward diffusion process and the backward ODE. Strengths: 1. The authors provide a theoretical explanation of the phenomenon that ODE samplers outperform SDE samplers in the small NFE regime but fall short in the large NFE regime. 2. The experimental results on image generation tasks validate the effectiveness of the method. Weaknesses: The proposed method relies on several hyperparameters (e.g. $S_{noise}, N_{restart,i}, K_i, t_{min,i}, t_{max,i}$), and the hyperparameters differ across tasks. It would be hard to effectively tune these parameters in real applications. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Since the derivation of the Restart algorithm is motivated by the theoretical analysis of sampling error, is there a way to choose those hyperparameters based on the error analysis? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions. **Q1: The proposed method relies on several hyperparameters (e.g. $N_{restart,i}, K_i, t_{min,i}, t_{max,i}$), and the hyperparameters differ in different tasks. It would be hard to effectively tune these parameters in real applications. Since the derivation of the Restart algorithm is motivated by the theoretical analysis of sampling error, is there a way to choose those hyperparameters based on the error analysis?** A: Thanks for the question. Our choice of hyperparameters is partially motivated by theory. For instance, for a small $t_{min}$, we usually pick the parameter $t_{max} \approx B$, where $B$ is the radius of the dataset; this ensures that the contraction factor $\lambda$ in Theorem 4 is sufficiently large. Note that our theoretical results contain a number of Lipschitz constants, which are difficult to estimate in practice; thus our theoretical upper bounds help us decide high-level scaling but do not accurately prescribe a precise parameter choice. As another example, a larger accumulated error requires bigger Restart intervals and more Restart iterations, thus we use a larger $K$ and Restart interval for the weaker VP model [1] on CIFAR-10, compared to the EDM model [2]. *[1] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. ICLR 2021.* *[2] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.* --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my score based on the clarification. Please put the discussion in the final version.
Summary: The paper proposes a new sampling method for diffusion models, termed Restart Sampling. The authors first theoretically analyze the error propagation in diffusion models for stochastic and deterministic samplers under the Wasserstein-1 distance and show that ODE samplers have a lower discretization error but SDE samplers contract the initial distribution error as we run more steps. This agrees with the intuition and the experimental findings that support that ODE samplers are better for low NFEs but their performance flattens for more NFEs. Based on this analysis, the authors propose a method that tries to achieve the best of both worlds: it contracts the initial error with more steps and achieves the same discretization error as the ODE samplers. The implementation of the new method is very straightforward: one runs the ODE sampler and every K steps reverts back to some prior diffusion time using the forward model. Strengths: The authors study a relevant problem in diffusion models. The proposed solution is simple, effective and novel. The theoretical results motivate the method and show clearly the differences between the SDE and the ODE samplers. The presentation of the paper is excellent. The authors show many experimental results, starting from toy models and going all the way to state-of-the-art text-to-image diffusion models. I think the paper and the method are of interest to the community and the audience of NeurIPS. Weaknesses: The error propagation of diffusion models has been studied before. The results that I am aware of are from the papers "Sampling is as easy as learning the score", "Restoration-degradation beyond linear diffusions: A non-asymptotic analysis for DDIM-type samplers" and "The probability flow ODE is provably fast". The first studies the error propagation of the SDE sampling method and the latter two the propagation of errors for deterministic samplers.
It would be beneficial to compare with these works, highlight potential differences in the approach and the final results, etc. Also, apart from the stochastic and ODE samplers, there is a whole family of samplers that satisfy the same Fokker-Planck equations and hence give the same marginals, e.g. see the work "Fast Sampling of Diffusion Models with Exponential Integrator" and also some of the samplers used in the "Elucidating the Design Space of Diffusion-Based Generative Models" (EDM) paper. It would be interesting to compare theoretically and experimentally to these samplers. Another concern I have is that the evaluation is not as thorough as it should have been, and it is only done for relatively high NFEs. Since evaluating the performance of a trained model is relatively easy, I would expect more thorough benchmarking. If the performance of the Restart sampler breaks down for low NFEs, it is useful to know it and acknowledge it in the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I think it would be useful to know: * how the performance is for lower NFEs, e.g. 20 sampling steps? * the behavior of the sampler across more datasets and NFEs. For example, it would be useful to include more comparisons with the EDM paper (and the references therein) for different datasets. * what are the differences in the theoretical analysis compared to prior results known for error-propagation in diffusion models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
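The Restart loop described in the summary above ("one runs the ODE sampler and every K steps reverts back to some prior diffusion time using the forward model") can be sketched on a toy one-dimensional VE diffusion whose score is known in closed form; all constants and schedules below (`T`, `t_min`, `t_max`, `K`, the geometric time grids) are illustrative choices, not the paper's settings:

```python
import numpy as np

# Toy 1-D VE diffusion with data ~ N(0, 1): the noised marginal is
# p_t = N(0, 1 + t^2), so the exact score is available and we can check
# that Restart returns samples with roughly the right spread.

def score(x, t):
    return -x / (1.0 + t * t)       # exact score of N(0, 1 + t^2)

def ode_run(x, ts):
    # Euler steps on the probability-flow ODE dx/dt = -t * score(x, t).
    for t, t_next in zip(ts[:-1], ts[1:]):
        x = x + (t_next - t) * (-t) * score(x, t)
    return x

def restart_sample(rng, T=80.0, t_min=0.1, t_max=1.0, K=3):
    x = rng.normal(scale=np.sqrt(1.0 + T * T))   # prior sample at time T
    x = ode_run(x, np.geomspace(T, t_min, 60))   # main backward ODE pass
    for _ in range(K):                           # Restart iterations
        # Forward process: re-noise from t_min up to t_max ...
        x += rng.normal(scale=np.sqrt(t_max**2 - t_min**2))
        # ... then run the backward ODE over [t_min, t_max] again.
        x = ode_run(x, np.geomspace(t_max, t_min, 20))
    return x

rng = np.random.default_rng(0)
samples = np.array([restart_sample(rng) for _ in range(2000)])
# The sample std should be roughly sqrt(1 + t_min^2) ≈ 1.0 for this toy model.
```

The point of the sketch is the structure of the loop: the forward noising injects fresh stochasticity (which is what contracts accumulated error in the paper's analysis), while all backward passes remain deterministic ODE integrations.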
Rebuttal 1: Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions. **Q1: The error propagation of diffusion models has been studied before ... What are the differences in the theoretical analysis compared to prior results known for error-propagation in diffusion models?** A: Thank you for pointing out these related works. We will add a more detailed comparison to the draft. Briefly, our result differs from the above-mentioned papers in the following key way: [1,2] aim to control the discretization error going from $T \to 0$ during the backward process. [1] does this using Girsanov’s theorem, and [2] does this using a more involved analysis, as one cannot apply Girsanov in the absence of diffusion noise. In contrast, our analysis focuses on how Restart iterations over the sub-interval $[t_{min}, t_{max}]$ reduce the accumulated discretization error from $[t_{max}, T]$. There are some similarities with [3]; notably, they also have a corrector stage where they run overdamped LMC. One important difference from [3] is that our Restart iterations are not interleaved with ODE integration. A remarkable flexibility of our analysis is that **we need to make very few assumptions about the accumulated error up to $t_{max}$**. Consequently, one can adopt a wide range of integration techniques over $[t_{max}, T]$, and apply Restart at the end (over $[t_{min}, t_{max}]$) to reduce the accumulated integration error over $[t_{max}, T]$. In contrast, the analysis in [1,2,3] is focused on specific ODE/SDE integration algorithms. We also note that [3] is a concurrent work and appeared on arXiv after our submission to NeurIPS. We will include these discussions in our revised version. **Q2: Apart from the stochastic and the ODE samplers, there is a whole family of samplers that satisfy the same Fokker-Planck equations and hence give the same marginals, e.g.
see the work "Fast Sampling of Diffusion Models with Exponential Integrator" (DEIS) and also some of the samplers used in the EDM paper. It would be interesting to compare theoretically and experimentally to these samplers.** A: We would like to note that the samplers highlighted by the reviewer are encompassed within either the stochastic or the ODE samplers we've discussed, and we've already made some comparisons in our paper. Specifically, we've compared against the ODE sampler (Heun) in EDM and the recommended stochastic sampler in EDM (Improved SDE). Please refer to Fig.3, Table.1, Fig.8, and Fig.10 in the draft for details. We appreciate the suggestion to compare faster ODEs with exponential integrators. We've incorporated DDIM in the draft, which serves as the first-order counterpart to DEIS. Our new empirical results demonstrate that Restart can improve over DPM-Solver, which is also a higher-order ODE solver that utilizes exponential integrators. The response to the subsequent question provides more detailed results. **Q3: Another concern I have is that the evaluation is not as thorough as it should have been and it is only done for relatively high NFEs. How is the performance for lower NFEs, e.g. 20 sampling steps?** A: Thanks for the suggestion. We employ the recommended Heun’s 2nd-order ODE sampler [4] in the main backward process of Restart in our draft. Since Heun’s 2nd-order ODE is not targeting the low NFE regime, we did not emphasize the performance in that regime. To validate the effectiveness of the Restart sampler in the low NFE regime, we use the faster ODE sampler in [5] (DPM-Solver) in the backward processes of Restart. We’ve included the FID versus NFE curves in Fig.1(a) in the rebuttal PDF in the “Summary of Updates” comment above. The results show that Restart consistently outperforms DPM-Solver with an NFE ranging from 16 to 36. This demonstrates Restart's capability to excel over ODE samplers, even in the small NFE regime.
We will include these results in our updated draft. **Q4: The behavior of the sampler across more datasets and NFEs. For example, it would be useful to include more comparisons with the EDM paper (and the references therein) for different datasets.** A: Thanks for the suggestions. Following the suggestion, we evaluated the Restart sampler on smaller NFEs, particularly when paired with DPM-Solver, as previously discussed. We've also highlighted the efficiency of the Restart sampler on the FFHQ-1024 dataset, as depicted in Fig.2 of the rebuttal PDF. We will include more datasets in the updated draft. *[1] Sampling is as easy as learning the score, Chen et al., ICLR 2023* *[2] Restoration-degradation beyond linear diffusions: A non-asymptotic analysis for DDIM-type samplers, Chen et al., ICML 2023* *[3] The probability flow ODE is provably fast, Chen et al., arXiv 2305.11798* *[4] Elucidating the Design Space of Diffusion-Based Generative Models, Karras et al., NeurIPS 2022.* *[5] DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps, Lu et al., NeurIPS 2022.* --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your rebuttal. Please incorporate these discussions in the camera-ready version of your work, if possible. I will increase my score to 7.
Summary: By analyzing the trade-off between good sample quality and sampling time of both ODE- and SDE-based generative models, this paper proposes a restart sampling strategy to combine the advantages of ODE and SDE sampling methods. The authors prove two theorems that bound the total error, measured by the Wasserstein distance between the generated and data distributions, of the ODE, SDE, and restart sampling methods respectively. It is illustrated that the total error can be decomposed into two parts: the additional sampling error generated by discretization error and the contracted error generated by the accumulated total error from previous sampling steps. Moreover, it is proved that ODE-based samplers have smaller additional sampling errors and SDE-based samplers have smaller contracted errors. Comparing the three upper bounds on the total error, it can be proved theoretically that restart sampling yields a smaller total error because its additional sampling error and contracted error are both small. Finally, the authors conduct a range of experiments which show empirically that: 1. The total error of the restart sampler is indeed smaller than that of the others. 2. The restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. 3. The restart sampler better balances text-image alignment/visual quality versus diversity than previous samplers. Strengths: 1. This paper is written with meaningful motivation and a clear structure. The restart sampling method proposed by this paper is innovative, simple, and effective. 2. The reasonableness and effectiveness of the restart sampling method are proved both theoretically and experimentally. 3. The upper bounds on the total error of the three sampling methods give us an intuitive understanding of the advantages and disadvantages of the three sampling methods. Weaknesses: 1. The effectiveness of the restart sampling method on high-resolution image synthesis is not confirmed.
For example, a comparison of sampling speed and accuracy on ImageNet 128x128 and 512x512 should be added. 2. The paper includes a sensitivity analysis of the number of restart iterations K, but there is no experiment on the sensitivity of another hyperparameter: the position and length of the restart interval. 3. As pointed out in the paper, the contracted error diminishes exponentially with the number of repetitions K, though the additional error increases linearly with K. Figure 4 illustrates this trade-off phenomenon, too. So it may be hard to find a suitable K that makes the restart algorithm work for different datasets or tasks globally, making it difficult to apply. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Although the number of sampling steps decreases by applying the restart strategy, the repeated runs of the ODE solver in the restart interval seem time-consuming. Moreover, is the NFE incurred in the restart interval added to the total NFE in Figure 3? 2. Why does the total error remain the same using the ODE sampler when NFE>20? What are the results of the three samplers with NFE<20? Is it consistent with the trend in Figure 1(b)? 3. Why not compare to DPM-Solver? Moreover, why not apply restart to DPM-Solver to see if there are consistent benefits? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions. **Q1: The effectiveness of the Restart sampling method on high-resolution image synthesis is not confirmed. A comparison of sampling speed and accuracy on ImageNet-128/-512 should be added.** A: Thanks for the suggestion. We agree with the reviewer that including the suggested datasets can strengthen our experimental results. However, we would like to note that we have already conducted experiments at a resolution of 512x512 on the Stable Diffusion model. In particular, Restart demonstrates a superior FID and CLIP/aesthetic score trade-off, underscoring its scalability to higher-resolution images. As suggested by reviewer kog8, we've also presented a qualitative demonstration of Restart's effectiveness on FFHQ-1024 (please refer to Fig.2 in the rebuttal PDF). **Q2: There is no experiment on the sensitivity of another hyperparameter: the position and length of the restart interval.** A: Thank you for the suggestion. In Fig.9 of our paper, we illustrated the sensitivity of $t_{min}$ while keeping $t_{max}$ constant. We additionally provide more sensitivity analysis of both the position and the length of the Restart interval on CIFAR-10. We implement only one Restart interval in all the experiments. We include the results in Fig.3 in the rebuttal PDF. For sensitivity to the Restart length $t_{max}-t_{min}$, we fix $t_{min}$ at 0.06 for VP and 0.14 for EDM. In theory, a longer interval enhances contraction but may add more additional sampling errors. Again, the balance between these factors results in a V-shaped trend in our plots (Fig.3(a)). In practice, selecting $t_{max}$ close to the dataset's radius usually ensures effective mixing when $t_{min}$ is small. For sensitivity to $t_{min}$, Fig.3(b) shows that a moderately small $t_{min}$ minimizes the approximation error post-restart on CIFAR-10.
However, the contraction effect weakens as $t_{max} - t_{min}$ shrinks. **Q3: It may be hard to find a suitable $K$ to make the restart algorithm work for different datasets or tasks globally, making it difficult to apply.** A: Thanks for pointing this out. We agree that fully optimizing the hyper-parameter $K$ would be challenging. However, in general, the quality of generated images initially improves and later worsens as $K$ increases. This trend makes it feasible to identify an appropriate $K$ value through a straightforward binary search. In our experiments, we've found that choosing a value for $K$ that leads to improved performance is relatively easy. For example, introducing a Restart interval with $K=2$ at small time consistently outperforms the baselines for all the datasets given the same NFE. In addition, as a heuristic, one could set $K$ to reasonable values by following the recipe that at a smaller time $t$, a larger $K$ is necessary to contract more accumulated errors. Nevertheless, we acknowledge that determining the optimal $K$ for different Restart intervals could be intricate. We will delve deeper into this in future studies. **Q4: Although the number of sampling steps decreases by applying the restart strategy, the repeated runs of the ODE solver in the restart interval seem time-consuming. Moreover, is the NFE incurred in the restart interval added to the total NFE in Figure 3?** A: Yes, the total NFE reported in the paper includes both the NFE in Restart intervals and the NFE in the main backward process. Even though each Restart interval involves several function evaluations, the overall FID-NFE trade-off of Restart remains superior to previous methods. This is primarily because Restart allows for a reduced NFE during the main backward process. **Q5: Why does the total error remain the same using the ODE sampler when NFE>20 (Fig.2)? What are the results of the three samplers with NFE<20?
Is it consistent with the trend in Figure 1(b)?** A: Fig.2 plots the Pareto frontier of total error versus NFE. For ODE, we have conducted experiments with NFE 20, 40, 80, 160, 320, 640, with total error 0.89, 0.90, 0.90, 0.90, 0.90, 0.90 respectively. This reveals that the Pareto front stabilizes at an NFE of 20 with an error of 0.89. A larger NFE didn’t reduce the error since the discretization error is already small. We conducted additional experiments (please see Fig.1(a) in the rebuttal PDF) and verified that the trend is consistent when NFE is less than 20. We will include the result in the updated draft. **Q6: Why not compare to DPM-Solver? Why not apply Restart to DPM-Solver to see if there are consistent benefits?** A: Thanks for the suggestions. As suggested by the reviewer, we compare Restart with DPM-Solver. In order to further accelerate Restart, we also use DPM-Solver in the main/Restart backward processes of Restart. We’ve included the FID versus NFE curves in Fig.1(a) in the rebuttal PDF in the “Summary of Updates” comment above. The results show that Restart consistently outperforms DPM-Solver with an NFE ranging from 16 to 36. This demonstrates Restart's capability to excel over ODE samplers, even in the small NFE regime. It also suggests that Restart can consistently improve other ODE samplers, not limited to the DDIM and Heun samplers in the paper. Surprisingly, when paired with DPM-Solver, Restart achieves an FID score of 2.11 on VP [1] when NFE=30, which is significantly lower than any previous numbers (even lower than the SDE sampler with an NFE $\ge 1000$ in [1]), and makes the VP model on par with more advanced models (such as EDM [2]). We will include these results in our updated draft. *[1] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations.
ICLR 2021.* *[2] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.* --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the rebuttal. I like the direct comparison with DPM-Solver. I suggest the authors carefully update the paper in the final revision to make clear the points raised in the review. I have increased my score.
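The search heuristic sketched in the answer to Q3 above relies on FID first improving and then worsening as $K$ grows, i.e. on a (roughly) unimodal FID-versus-$K$ curve. A minimal illustrative sketch, phrased as a ternary search (a standard way to minimise a unimodal function over integers); `evaluate_fid` is a hypothetical callback that would generate samples with a given $K$ and score them, and is not part of any released code:

```python
def find_best_k(evaluate_fid, k_min=1, k_max=16):
    """Ternary search for the K minimising a (roughly) unimodal FID(K) curve."""
    lo, hi = k_min, k_max
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if evaluate_fid(m1) < evaluate_fid(m2):
            hi = m2  # for a unimodal curve, the minimum cannot lie right of m2
        else:
            lo = m1  # likewise, the minimum cannot lie left of m1
    # Only a handful of candidates remain; check them exhaustively.
    return min(range(lo, hi + 1), key=evaluate_fid)
```

For a unimodal curve this needs only a logarithmic number of FID evaluations, which matters when each evaluation requires generating thousands of samples.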
Rebuttal 1: Rebuttal: # Summary of Updates We would like to thank all reviewers for their constructive feedback. We have revised our draft according to all the valuable comments. Below we summarize the updates in the revised version. We also include all the new figures in the attached PDF files. ## 1. More experiments In response to Reviewers YGP7, kn2f, and kog8, we have compared Restart with DPM-Solver and incorporated experiments in the low-NFE regime. We have included the FID versus NFE curves in Fig.1(a) in the rebuttal PDF. The results show that Restart consistently outperforms DPM-Solver for NFE ranging from 16 to 36. As recommended by Reviewer kog8, we qualitatively validate the effectiveness of Restart on the FFHQ-1024 dataset (Fig.2 in the rebuttal PDF). Additionally, we have included smaller NFEs in the study of total error versus NFE (Fig.1(b) in the rebuttal PDF), and additional sensitivity analyses of hyperparameters (Fig.3 in the rebuttal PDF), as suggested by Reviewer YGP7. ## 2. Discussion on related works / Clarification on theorems As suggested by Reviewer k2nf, we have added a discussion of the differences between Restart and prior or concurrent works. In response to Reviewer kog8, we have also provided clarifications on our theorems, emphasizing their objective of examining the behavior of various samplers in the presence of accumulated errors. Pdf: /pdf/aac25e1f1e6a3b84cb20ce514b5225b3f5c7b7c0.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a sampler that balances sampling speed and quality by adding noise and restarting the process. The authors provide a theoretical analysis showing a better upper bound for this method compared to the original ODE and SDE samplers. Experiments are conducted to verify their claims. Strengths: 1. The authors identify the main cause of the differing performance of SDE and ODE samplers in different regimes. By taking advantage of the contraction induced by adding noise, they balance the speed and quality of the sampler. 2. The theoretical analysis is clear and well-written. 3. The experiments are thorough with a good explanation of the choices of hyperparameters. 4. The experiments show good results for the proposed method. Weaknesses: See questions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I wonder what the wall clock time looks like between Restart, SDE, and ODE. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and thoughtful feedback. Below we address specific questions. **Q1: I wonder what the wall clock time looks like between Restart, SDE, and ODE.** A: Thank you for the question. The wall clock time during sampling is approximately proportional to the NFE (number of function evaluations), which is reported in the paper. This is because the primary computational bottleneck during sampling lies in evaluating the neural networks. Any other overhead, in comparison, is negligible. Therefore, when comparing Restart, SDE, and ODE, their wall clock time will closely align with their respective NFEs. --- Rebuttal 2: Title: Reminder from AC Comment: Dear reviewer, The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper. Thanks, Area Chair
Sharp Calibrated Gaussian Processes
Accept (poster)
Summary: Motivated by the observation that the posterior variance of a Gaussian process is often poorly calibrated, the authors propose an alternative approach of attaching predictive quantiles to the posterior mean. In essence, their approach minimises the width of the predictive quantiles under an empirical calibration constraint computed on held-out validation data. By satisfying the empirical calibration constraint, the authors prove that the predictive quantiles are indeed approximately the right quantiles (Theorem 5.3). The approach is tested and compared against other calibration approaches in a toy example and on seven data sets from the UCI data set repository. The results show that it outperforms other techniques in terms of sharpness. Strengths: To begin with, I would like to thank the authors for their submission. ## Strengths * The paper is easy to follow and generally well written. I found only a few typos. * The problem that the posterior variance of a GP may be poorly calibrated is highly relevant, so methods that attempt to attack this problem, like the proposed approach by the authors, are certainly important. * Accompanying the method with a theoretical result that guarantees correctness (Theorem 5.3) reassures the practitioner. * I really like Section 5.2, where the authors spend some effort on reducing the computational cost in practice. They could have just left it at (7) and state that the optimisation problem would have to be computed for every confidence level $\delta$ of interest, but of course the approach in Section 5.2 is much more elegant. * The experiments appear to show that the proposed approach is sharper than alternatives. I, however, have some doubts about the experimental section. Please see below. ## General Comments and Suggestions * Should the title of the paper be "Sharply Calibrated GPs" instead of "Sharp Calibrated GPs"? I think "Sharp(ly)" should be an adverb, since it modifies "calibrated". 
* On line 75, you use $\mathcal{D}\_{\text{tr}}$ without introducing the symbol first. This is confusing, because you have just introduced $\mathcal{D}$, not $\mathcal{D}\_{\text{tr}}$, as the training data. * On line 75, you introduce the cut-point term as $\beta_\delta \sigma_{\mathcal{D}}(\delta, x)$. At this point, you should really better explain this term by answering the following questions: Should $\sigma_{\mathcal{D}}(\delta, x)$ be interpreted as a standard deviation? If so, how do you make sense of the fact that it depends on $\delta$? If not, then what is $\sigma$? If you're learning $\sigma$, why do you also need $\beta_\delta$? Can you not absorb $\beta_\delta$ into $\sigma$? You should also mention that $\beta_\delta$ may be negative. Without that information, (1) doesn't make much sense for small $\delta$. * On line 81: Do you mean "are small" or "are as small as possible"? This nuance is very important and changes the meaning substantially. * On line 119, you say that the log-marginal likelihood does not account for calibration. Depending on what precisely you mean by calibration, I think that this is false: the log-marginal likelihood is an empirical estimate of the KL divergence, and the KL divergence certainly accounts for the "whole distribution" and therefore the calibration. * In Section 5.1, is $\sigma\_{\mathcal{D}\_{\text{tr}}}(\theta, x)$ the posterior variance of the GP where the kernel parameters are now the parameters that we optimise over? From line 146 onwards, you don't actually explain this! Since this is a crucial part of your construction, I think the exposition would be better if you were to clearly explain this somewhere around line 158. * On line 175, you state that (5) can be replaced by (6) without further explanation. Since this is an important step in the derivation of your algorithm, I think that it deserves a careful explanation.
Moreover, it is not true that (5) and (6) are equivalent, since $q\_{\text{lin}}$ is linearly interpolated. I think the exposition would benefit from a little more care here. * Line 268: "calibration" -> "calibrated" * Could you add to the legend of Figure 2 what the squares and crosses are? Why does it look like there are two "lines of squares/crosses"? Weaknesses: ## Weaknesses ### Assumption 4.1 Not Obviously Satisfied I agree that Assumption 4.1 is obviously satisfied for the variance of a kernel. However, for the inverse length scale, I can believe that Assumption 4.1 might be satisfied, but this is not obvious at all. Would the authors be able to produce a proof that the posterior variance of a GP is monotonic in the inverse length scale? ### Theorem 5.3 Might Not Be Valid The proof of Theorem 5.3 crucially relies on Theorem 1 by Marx et al. (2022). This Theorem 1, however, operates in the setting where all pairs $(x_i, y_i)$ are sampled i.i.d. (see Section 2 of Marx et al., 2022). But in the setting of GPs, which is the setting of the submission, the pairs aren't independent, because they are correlated by the sample from the underlying GP! This means that Theorem 1 might not apply, which means that Theorem 5.3 might not be valid. Could the authors comment on this? ### Result of Section 7.1 Looks Questionable In Figure 2, you state that your approach produces a 99% confidence interval. However, if you look at the right side of Figure 2.(a), then the confidence interval completely misses the data! It hence looks like the shaded region is not actually a 99% confidence interval, which makes me wonder whether the predictions are actually well calibrated. ### Bolded Results Are Not Significantly Best Results Throughout the main paper and the supplement, you bold the score with the best average.
However, if $x_1 \pm e_1$ and $x_2 \pm e_2$ are such that $x_1 < x_2$, but $x_1 + e_1 \ge x_2 - e_2$, then you cannot actually conclude with confidence that $x_1 \pm e_1$ is really lower than $x_2 \pm e_2$, because the difference might be explained by random fluctuations. In other words, you should really only bold results that are the best results at reasonable statistical significance. Currently, because of this issue, I think that the results throughout are misrepresented. ### What Are STD and NLL in Table 2.1? The premise of the paper is that you discard the predictive variance and instead produce calibrated predictive quantiles at one or multiple given confidence levels. This means that the predictions now consist of a mean and associated intervals. Therefore, the predictions are no longer probabilistic, so I really don't understand what the STD and NLL in Table 2 are! (For a given mean and interval, how can you compute a probability?) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ## Conclusion Although I think the proposed approach is very interesting, I am mainly worried about the validity of Theorem 5.3, the soundness of the result in Section 7.1, the use of boldness to present the experimental results, and the STD and NLL metrics in Table 2.1. Therefore, at this point, I cannot accept this submission and must unfortunately recommend rejection. However, if the authors are able to address the following points, then I am willing to change my reject to an accept: * Please see if Theorem 5.3 is really flawed, and, if it is, whether it can be fixed. * Please correct the use of boldness throughout the main paper and the supplement. * Please investigate whether the predictions in Figure 2.(a) really are calibrated. * Please explain what the STD and NLL metrics in Table 2.1 are. EDIT The authors have largely addressed my criticisms, so I have increased my score to a borderline accept.
I believe that the submission might deserve a higher score, but I am not comfortable recommending any higher without seeing a revised version of the PDF. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
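For readers unfamiliar with the construction discussed in the review above: the paper attaches predictive quantiles of the form $\mu(x) + \beta_\delta \sigma(x)$ to the posterior mean, with $\beta_\delta$ chosen on held-out calibration data (and possibly negative for small $\delta$). A minimal sketch of how such a cut-point multiplier can be fitted empirically; the `mu`/`sigma` functions below are simple stand-ins for the paper's GP mean and learned scale, not the authors' implementation:

```python
import numpy as np

def fit_beta(mu, sigma, x_cal, y_cal, delta):
    """Smallest cut-point multiplier beta such that the empirical coverage
    of mu(x) + beta * sigma(x) on the held-out calibration set is >= delta.
    Note that beta can be negative for small delta."""
    z = np.sort((y_cal - mu(x_cal)) / sigma(x_cal))  # standardised residuals
    k = int(np.ceil(delta * len(z))) - 1             # order-statistic index
    return float(z[k])

# Toy usage with stand-in mean/scale functions (hypothetical, not the paper's GP):
rng = np.random.default_rng(0)
x_cal = rng.uniform(-1.0, 1.0, size=2000)
y_cal = np.sin(3.0 * x_cal) + 0.1 * rng.standard_normal(2000)
mu = lambda x: np.sin(3.0 * x)          # assumed-known mean function
sigma = lambda x: np.full_like(x, 0.1)  # assumed-known scale function
beta = fit_beta(mu, sigma, x_cal, y_cal, 0.95)
coverage = float(np.mean(y_cal <= mu(x_cal) + beta * sigma(x_cal)))
```

By construction, at least a $\delta$-fraction of the calibration residuals fall below the chosen order statistic, which is the empirical analogue of the calibration constraint; the i.i.d. assumption debated between reviewer and authors is what transfers this guarantee to test data.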
Rebuttal 1: Rebuttal: Thank you very much for your review and your helpful comments. Please find our answers to your questions below. If you feel that we have adequately addressed your concerns and questions, we would appreciate it if you would consider updating your score. **Assumption 4.1.** Capone et al. (2022) have shown that Assumption 4.1 holds for the lengthscale of a class of stationary kernels (Lemma 3.3). In fact, the proof presented therein can be employed to show that Assumption 4.1 holds for any hyperparameters that lead to a monotonic increase in the Fourier transform. We will include these details in the revision, and give thorough examples of hyperparameters that satisfy these properties. Note also that several results that employ the so-called "fill distance" show that Assumption 4.1 holds for a broad class of kernels as the lengthscale becomes very large or very small (see, e.g., Chapter 11 in *Scattered Data Approximation* by Holger Wendland, 2011). **Theorem 5.3.** We believe there may be some confusion regarding our assumptions as well as the requirements of Theorem 5.3. In a standard Bayesian setting, a GP prior would, as you point out, result in joint dependence amongst the observations. However, we note that our method is inherently frequentist in nature: we use GPs simply as mechanisms to produce kernelized mean functions and confidence intervals, rather than as a true Bayesian prior. Because of this frequentist setting, we make the standard frequentist assumptions about the data: there is some unobserved (but fixed) function $f$, and the observations $y \mid f(\boldsymbol x)$ are drawn i.i.d., meeting the requirements of Theorem 1 from Marx et al. (2022). Again, we emphasize that our model's mean and confidence intervals---which follow the same functional form as a Bayesian GP posterior---are purely for computation and do not arise from any assumed prior distribution.
We will make this more clear in the revision, and we will explicitly specify in Section 2 that we make the assumption of i.i.d. input/observation pairs. **Use of bold font.** We have rewritten the tables and changed the exposition of the results; see the included PDF. In particular, we now only use bold font whenever the results are best statistically. Furthermore, we employ mean plus minus standard error, which best illustrates statistical significance in this case, and state this clearly in the revised version. **Section 7.1/Figure 2.** The main purpose of the toy example was to illustrate differences amongst how each method generated confidence intervals, as opposed to showing how well-calibrated they are. To this end, we deliberately picked a setting with little data and training inputs that are far apart, as this is where the difference between approaches is most obvious. Note that, in this setting, our method has no calibration guarantees, as our guarantees only hold in regions sufficiently near training data (i.e. in regions where test data and train data are i.i.d. draws from the same data distribution). This has caused some confusion, so we have revised the toy example. The toy example is now calibrated and also highlights the differences between approaches. Please refer to the submitted PDF for details. **STD and NLL in Table 2.1.** Thank you for this comment. Following the method presented in Section 5.2, we can use our method to compute predictive quantiles for arbitrary $\delta$. Hence, our model implicitly specifies a cumulative distribution function, obtained by inverting the quantile function. This allows us to compute the standard deviation and the negative log likelihood of the predictions. --- Rebuttal Comment 1.1: Comment: Thank you for your reply to my rebuttal. I appreciate the clarifications. I am largely happy with the adjustments and will increase my score, as promised. I would like to make two final remarks: 1. 
In the attached PDF, Figure 1(c) for the regular GP looks suspicious: Did you add the noise variance to the predictions? Did you maximise the marginal likelihood and condition the GP on all the data? The uncertainty regions currently increase suspiciously, which makes me wonder whether the GP is really conditioned on all the data in the plot. 2. I would appreciate it if you could add a clarification in the main body about how the STD and NLL are computed in the experiments. --- Reply to Comment 1.1.1: Comment: Thank you very much for your comments. We are very happy to see that we were able to adequately address your questions. Regarding Figure 1c: the figure shows training and calibration data and the GP is not conditioned on all the data, hence the changes in uncertainty. This is described in Section 5.1 of the paper: we separate training from calibration data, allowing the GP to be calibrated on data that is also out of distribution.
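The authors' point in the rebuttal above, that predictive quantiles at arbitrary $\delta$ implicitly specify a CDF from which STD and NLL can be computed, can be made concrete. A hedged sketch under the assumption of an increasing quantile function; the logistic quantile function below is purely a stand-in with known moments and does not reflect the authors' actual implementation:

```python
import numpy as np

def _trapz(y, x):
    # Simple trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def moments_from_quantiles(q, n=4001, eps=1e-4):
    """Mean and std from a quantile function q, via the inverse-CDF
    identities E[Y] = int_0^1 q(d) dd and E[Y^2] = int_0^1 q(d)^2 dd."""
    d = np.linspace(eps, 1.0 - eps, n)  # clip the extreme tails
    qs = q(d)
    mean = _trapz(qs, d)
    second = _trapz(qs ** 2, d)
    return mean, float(np.sqrt(second - mean ** 2))

def nll_from_quantiles(q, y, n=4001, eps=1e-4):
    """-log density at y: since the CDF inverts q, the density at y is
    1 / q'(delta) at the delta solving q(delta) = y (y must lie inside
    the evaluated quantile range)."""
    d = np.linspace(eps, 1.0 - eps, n)
    qs = q(d)                            # q must be increasing
    i = int(np.searchsorted(qs, y))
    slope = (qs[i + 1] - qs[i - 1]) / (d[i + 1] - d[i - 1])
    return float(np.log(slope))

# Stand-in: logistic quantiles with mean 2 and scale 0.5
# (true std = 0.5 * pi / sqrt(3), density at the mean = 1 / (4 * 0.5)).
q = lambda d: 2.0 + 0.5 * np.log(d / (1.0 - d))
mean, std = moments_from_quantiles(q)
nll = nll_from_quantiles(q, 2.0)
```

The accuracy of both estimates is limited by the tail clipping `eps` and the grid resolution `n`; a real implementation would need to balance these against the cost of evaluating the model's quantiles.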
Summary: The authors propose a novel method for calibrating Gaussian process posterior variances using held-out data. In particular, they train a new, separate GP using the held-out data for the variance, using the GP trained on the original dataset for the mean. They do this in a way which approximately maximises the sharpness (roughly speaking, minimises the variance). The calibration properties of the method are supported by theory. This is supported by an empirical evaluation of the calibration error on synthetic data and of the sharpness on real-world data. Post-discussion: The authors have provided considerable improvement in their presentation of the results, and have added some extra results which further improve the clarity. Strengths: 1. (major) The idea is novel, and appears to be theoretically well supported, although I did not review the proofs in the supplementary material. 2. (major) The main claimed advantage of the proposed method is that it produces sharper predictives than comparable methods, which is well supported by the real-world experimental results, notwithstanding the points raised in the questions. 3. (major) Code is provided, which should improve reproducibility. It is based on a widely used framework, which increases potential for impact. Weaknesses: 1. (major) The paper is not sufficiently clear. The method is fairly well explained, but the evaluation is extremely hard to follow, details of which are in the questions section. One particular area for straightforward correction is with Table 2: the values with the lowest mean are highlighted but there are several rows where the estimates are overlapping, so this should be clarified by the highlighting. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. It's not clear to me how to interpret figure 2. I guess that the crosses are training data, what about the squares? Are they the calibration data? 2.
I guess Table 1 is for the synthetic data plotted in figure 2, but I don't think this is stated anywhere. 3. When you train the vanilla GP, do you add the calibration data to the training data? 4. The sharpness of the predictives is claimed to be an advantage, but really it looks like you cover less of the data as a result in the toy example? 5. I appreciate that the main purpose of Table 2 is to compare the sharpness amongst the well-calibrated methods, but I think it would be highly worthwhile to see how a vanilla GP compares on these same metrics. 6. With respect to line 103, it is mentioned that the assumption holds for isotropic kernels. I think that it also works out if you transform an isotropic kernel to have different lengthscales in each dimension, and I think you've implied this elsewhere, is that right? Some suggestions and typos: * For the hyperparameters, could you use $θ$? * It would be helpful to have a bit more explanation in section 3. For example, instead of saying 'are small for every $δ$' perhaps you could say that the goal is to minimise that quantity for every $δ$. * line 127: the z-score -> z-score * Summations in equations 5, 7: provide the starting value (i=1 I guess) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: The main limitation of the method appears to be its applicability to different covariance functions, and the added computational cost, both of which are noted clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We have made several improvements to the paper, particularly in the experimental section, to address your concerns about clarity. Below you can find our answers to your questions and comments. Corresponding modifications can also be seen in the PDF submitted with this rebuttal. Should you feel that the steps undertaken to clarify the paper address your concerns, we would be very grateful if you would consider raising your final score. **Presentation of numerical results.** We have removed bolding in Tables 1 and 2, except for cases that are statistically best-performing. The new tables show the mean plus minus standard error and we clearly indicate this in the revised version. **Responses to questions.** 1. We have modified Figure 2 and the toy example. The main purpose of the toy example was to illustrate differences amongst how each method generated confidence intervals, as opposed to showing how well-calibrated they are. To this end, we deliberately picked a setting with little data and training inputs that are far apart, as this is where the difference between approaches is most obvious. However, we do not expect our method to be calibrated in this low-data region, as our method is only guaranteed to work in regions near training data (i.e. we assume that the test data and train data are drawn i.i.d. from the same distribution). We realize that this setting did not clearly illustrate the calibration properties of each method, which made Figure 1 confusing. In the submitted PDF, we include a revised Figure 1 that uses a different toy dataset to better illustrate the calibration differences between each approach. Please refer to the submitted PDF for details. 2. Table 1 only summarizes the average calibration error for all methods and datasets. A full table with detailed experimental results can be found in the supplementary material (it did not fit in the main text).
We now state this explicitly in the table caption. Furthermore, we have moved Figure 2 to a different page from that of Table 1 to underscore that Table 1 is unrelated to Figure 2. 3. The training data is not added to the vanilla GP. This is now stated explicitly in the experimental section. 4. Please refer to point 1. 5. The GP posterior standard deviation is considerably small in some examples, which leads to low STD values. However, this of course comes at the cost of a very poor calibration score, as can be seen in Table 1. We felt that including this data in the tables in the main paper would likely cause some confusion, since the models in this case are simply overconfident, not sharp AND calibrated. We will include the scores for the vanilla GP and fully Bayesian GP in the supplementary material. We have also included this information in the PDF submitted with the rebuttal. 6. Thank you for this observation. The statement indeed holds for the more general class of stationary kernels, i.e., kernels where k(x,y) = K(x-y) for some function K. We have rewritten the statement accordingly in the revised version. --- Rebuttal Comment 1.1: Comment: Apologies for the slow response. I am mostly satisfied, and I think the new figure is much clearer. There are just two points I want to follow up on. 3 - Did you mean to write "calibration data" instead of "training data" here? I think that you should add the calibration data to the training data when training the vanilla GP (or with the fully Bayesian approach), because you (I guess) lose something by keeping some data back for calibration. I guess that this would not make much difference to your results. 5 - I do not think the results made it into the attached pdf ... ? You could add a markdown table in a comment. I think that this is generally very good work, and I am keen to raise the score, but I just need these points clarified. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for your response and additional comments. We are very happy to see that we could address your previous comments adequately. 3 - Thank you for this observation. The squares denote both training and calibration data. We initially decided not to differentiate between both because we wanted to emphasize calibration quality over training quality. However, we realize this might be somewhat unclear, so we will use different symbols for the training and calibration data and clearly state this in the revision. Regarding the comparison with the vanilla and fully Bayesian GPs, it is safe to say that adding more data will improve sharpness since model confidence increases. However, it is generally difficult to say if calibration will improve, as the models always fully trust the (generally incorrect) GP prior. To test this, we reran several experiments in the setting where all data is used both for the vanilla and fully Bayesian GP, and observed that both calibration and sharpness improved, albeit only slightly. For this reason, in the revision, we will present the results for the vanilla and fully Bayesian models where all the available data is used to train them. This will be clearly stated and discussed. Furthermore, Figure 1c will be used exclusively to show how the base model performs before recalibration occurs and will be relabeled as "Base model (uncalibrated)". 5 - We accidentally omitted the results when generating the PDF. We apologize for this oversight. Below you can find the sharpness results (mean ± standard error) for the vanilla and Fully Bayesian GP, alongside ours. Please note that we do not have the data for the STD metric of the vanilla GP in the Facebook2 setting anymore and have not been able to rerun the dataset due to time constraints. This will be included in the revision. 
**Negative log-likelihood:**

| | Boston | Yacht | Auto MPG | Wine | Concrete | Kin8nm | Facebook2 |
|---|---|---|---|---|---|---|---|
| Ours | 0.24 ± 0.1 | 0.6 ± 0.02 | 0.5 ± 0.01 | 1.5 ± 0.07 | 0.91 ± 0.04 | -0.56 ± 0.0 | -1.2 ± 0.03 |
| Vanilla GP | 0.73 ± 0.04 | 0.85 ± 0.06 | 0.5 ± 0.04 | 1.5 ± 0.03 | 1.2 ± 0.02 | -0.51 ± 0.02 | 0.51 ± 0.02 |
| Full Bayes | | -1.2 ± 0.1 | 0.64 ± 0.2 | | | | |

**Centered 95% intervals:**

| | Boston | Yacht | Auto MPG | Wine | Concrete | Kin8nm | Facebook2 |
|---|---|---|---|---|---|---|---|
| Ours | 1.2 ± 0.01 | 1.8 ± 0.03 | 1.6 ± 0.02 | 4.7 ± 0.04 | 2.5 ± 0.04 | 0.5 ± 0.02 | 1.5 ± 0.02 |
| Vanilla GP | 3.2 ± 0.01 | 1.8 ± 0.05 | 1.6 ± 0.02 | 6.6 ± 0.3 | 1.7 ± 0.03 | 0.75 ± 0.01 | 1.9 ± 0.01 |
| Full Bayes | | 0.37 ± 0.01 | 0.46 ± 0.01 | | | | |

**Standard deviation:**

| | Boston | Yacht | Auto MPG | Wine | Concrete | Kin8nm | Facebook2 |
|---|---|---|---|---|---|---|---|
| Ours | 0.3 ± 0.03 | 0.46 ± 0.08 | 0.36 ± 0.004 | 1.2 ± 0.03 | 0.63 ± 0.09 | 0.14 ± 0.003 | 0.17 ± 0.02 |
| Vanilla GP | 0.79 ± 0.02 | 0.45 ± 0.005 | 0.4 ± 0.002 | 1.7 ± 0.05 | 0.49 ± 0.005 | 0.23 ± 0.005 | |
| Full Bayes | | 0.1 ± 0.001 | 0.12 ± 0.009 | | | | |
Summary: This paper addresses the issue that the posterior variance of Gaussian processes is often poorly calibrated, typically underestimating the true quantiles. They propose a new method to calibrate uncertainty bounds, by training a quantity related to the posterior variance with new hyperparameters. This method further leverages optimization of the distance between the quantiles and the predictive mean, enabling sharp calibration. The method is tested on synthetic toy examples and standard UCI benchmark datasets, and appears to perform well against existing methods. Strengths: The paper is well written and presented and addresses a relevant problem, offering a principled solution. Weaknesses: The definition of a Gaussian process should specify that any **finite** collection of points is jointly Gaussian. The most concerning issue is that the paper appears to be dismissive of two recent competing methods (Song et al.; Kuleshov & Deshpande) in the introduction, and does not mention them again. Indeed, the current work has advantages over each of these methods in terms of its construction and theoretical guarantees; however, comparisons to these methods in terms of performance, as well as some further discussion, would help demonstrate the benefits of the proposed method. For example, the current method still relies upon a reasonably large dataset size, so dismissing a competing method because it only has asymptotic guarantees is perhaps premature. I would also question the presentation of the numerical results, in particular the use of both highlighting and boldface to denote the best performing model in the tables. Since confidence intervals are provided, it may be advisable to bold the best performing ones, and to use green to denote statistical significance that a single model is the best performing (i.e. multiple models can be bolded but none or one should be green).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does the proposed method compare with those proposed in Song et al, Kuleshov & Deshpande, as well as Capone, Lederer & Hirche? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is little view for potential negative societal impact due to improving the calibration of uncertainty bounds for Gaussian processes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review. We have made changes with your comments in mind as described below. If you feel that the changes sufficiently address your comments, we would be very thankful if you would consider raising your score. **Additional comparisons.** As you suggested, we compared our approach to that of Song et al. (2019), Capone et al. (2022) and Kuleshov and Deshpande (2022). We have included the results in the supplementary PDF. Note that the method of Capone et al. (2022) only computes uniform error bounds, i.e., 100 percent credible intervals, so we only considered this setting in the comparison. We noticed that training takes longer with the approaches of Kuleshov and Deshpande (2022) and Song et al. (2019) than our approach. This is because the optimization problem in our approach is fairly straightforward to solve, whereas Kuleshov and Deshpande (2022) and Song et al. (2019) require several optimization steps. Moreover, the approach of Song et al. (2019) is fairly involved and more difficult to implement than ours. We implemented the method of Kuleshov and Deshpande (2022) with the same neural network architecture suggested in their paper, trained over 2000 epochs. Our results suggest that the method of Kuleshov and Deshpande (2022) favors sharpness over calibration. While it achieves sharper intervals, it yielded significantly worse expected calibration errors than ours for most settings. Note that this is partially to be expected since their approach aims to achieve calibration in distribution, which only corresponds to quantile calibration if it is exact. Furthermore, it does not explicitly enforce calibration during training. We also noticed that it does not return valid intervals in some pathological cases (e.g., due to insufficient training steps). The method of Song et al. (2019) seemingly performs similarly to our approach in calibration, whereas it performed better in sharpness in some cases. 
However, we could only implement it on small data sets due to slow training. Furthermore, their method is fairly involved, and we relied heavily on the code kindly provided by the authors. Moreover, although we employed the same test and training splits for the data as in our case, we noticed differences in the GP model used in their starting setup, potentially leading to significantly different starting models. We will ensure an identical setup for the revision. The method of Capone et al. (2022) is purely Bayesian and thus heavily dependent on the prior. We employed their approach as is from the code available online. The resulting credible intervals are well-calibrated, consistent with the results presented in their paper. However, our approach is much better regarding sharpness. This is because Capone et al. (2022) require symmetric intervals ($\text{mean} \pm b \times \text{standard deviation}$), whereas our approach allows for asymmetric credible intervals. Furthermore, our method does not rely on the prior distribution. **Presentation of numerical results.** As you suggested, we also removed the bolding everywhere except when methods perform significantly better, i.e., whenever the statistics indicate it is indeed best. In the revised version, we present the mean ± standard error and clearly state this to avoid confusion. --- Rebuttal Comment 1.1: Title: Response Comment: I have read your response, as well as the responses to the other reviews. I am happy to raise my score in view of this; however, note that there appear to be a large number of changes that may make others uncomfortable accepting this paper without seeing a revised version of the document. --- Reply to Comment 1.1.1: Comment: Thank you kindly for engaging with us and for taking the time to reassess our paper. We agree that the paper should not change significantly compared to the initial submission.
In the present case, we are confident that the final paper will be largely identical to the initially submitted paper: the most significant changes pertain exclusively to the last section of the paper (Section 7), and have been presented in the PDF with the rebuttal. The only changes made to the main paper outside of Section 7 are the discussions of Assumption 4.2 and of the interpolant in Eq. (6), and the inclusion of the measurement noise in the vanilla GP prediction. However, these changes are very small and do not involve changing the core method in any way.
Summary: The paper tackles calibration of Gaussian processes in regression. The authors argue that while maximizing the evidence is a good way to choose hyperparameters to obtain an accurate posterior mean, it generally does not produce an accurate posterior variance. For this purpose, they propose a different way to obtain hyperparameters for the posterior variance, which minimizes the length of centered confidence intervals while constraining the parameters to provide accurate empirical quantiles. By replacing the constraint with a more tractable piecewise expression, they transform the problem into an unconstrained one. They further provide a practical algorithm and a theoretical result to obtain calibrated confidence intervals simultaneously over different confidence thresholds. Experiments on toy and UCI datasets show the benefits of the method. Strengths: I think the paper is well written and clear. The solution proposed is reasonable and tackles an important problem. The algorithms derived seem to be practical, and they are motivated by theoretical arguments that appear valid. Weaknesses: The authors bring up themselves that the core methodology they propose shares the same scalability issues as standard GPs, as it involves the inversion of a very large matrix. Hence, approximation methods are needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It is not immediately clear to me what the effect of replacing equation (5) with equation (6) is. I understand that the problem becomes easier to solve. Is the replacement, however, detrimental in some way? It is also not straightforward why the expression in (6) is the right one to use. I would encourage the authors to expand on this section. - It is not super clear how easily the inequality in Assumption 4.1 holds. Perhaps giving some examples, and discussing the limitations of this assumption, would make this part clearer. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work handles calibration for regression only. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and positive remarks. Based on your and other reviews, we have made several changes to improve the paper. Below you can find those that address your questions and comments specifically. Replacing (5) with (6) is not detrimental, as it still allows us to obtain a model that is sharp and calibrated. The expression in (6) is the simplest and most straightforward to compute. However, other forms of monotone interpolation are also possible without any loss of theoretical guarantees. In practice, the choice of interpolant becomes of little relevance for large data sets, since we only require small interpolation steps. We will include this discussion in the revised paper. Assumption 4.1 holds for any hyperparameter of a stationary kernel such that the Fourier transform of the kernel is monotonic with respect to that hyperparameter. This can be used to show, e.g., that the lengthscale of a stationary kernel satisfies Assumption 4.1 up to a scaling factor depending on the lengthscale (see Capone et al. (2022), Lemma 3.3). We will discuss this in more detail in the revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications. After reading the concerns raised by the other reviewers, I agree with them that some experiments might have been clearer. However, I believe the authors made an effort to address all concerns and evaluate against additional competing methods. Personally, I do not see major reasons why this paper should be rejected, in particular after the further experimentation. The problem that this paper tries to address is clear to me, and the methodology is novel and well motivated by the theory. The main theorem, in particular, provides a coverage guarantee with seemingly mild assumptions. The final algorithm is not particularly involved and fairly practical, apparently achieving good results.
--- Reply to Comment 1.1.1: Comment: Thank you very much for engaging with us during the discussion phase.
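The quantile-calibration criterion discussed in this thread (a model is calibrated at level tau if roughly a tau-fraction of targets fall below the predicted tau-quantile) can be sketched as follows. This is a minimal illustration of the general notion only, not the paper's Eq. (5)/(6); the function name, the Gaussian toy model, and the constants are ours.

```python
import random

def empirical_coverage(targets, predicted_quantiles):
    """Fraction of targets falling below their predicted quantile.

    For a quantile-calibrated model at nominal level tau, this fraction
    should be close to tau.
    """
    hits = sum(y <= q for y, q in zip(targets, predicted_quantiles))
    return hits / len(targets)

# Toy check: a Gaussian predictive model whose 90%-quantile is
# mean + 1.2816 * std (1.2816 is the z-score of the 0.9 quantile).
random.seed(0)
targets = [random.gauss(0.0, 1.0) for _ in range(20000)]
q90 = [0.0 + 1.2816 * 1.0] * len(targets)
cov = empirical_coverage(targets, q90)  # should be close to 0.9
```

A sharpness objective would then shrink the intervals subject to such coverage staying near the nominal level.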
Rebuttal 1: Rebuttal: We kindly thank all the reviewers for their helpful comments. They have helped significantly toward improving our paper. Below you will find a summary of the changes made to the paper. We have also attached a PDF with additional experiments and the revised toy problem. **Presentation of numerical results.** We have changed the presentation of the numerical results. In particular, we have removed the bolding whenever statistical evidence is insufficient to deem one method superior. We now present the numerical results as mean ± standard error, as opposed to mean ± standard deviation, as this better illustrates the expected deviation from the presented mean. We have also modified the presentation of the toy example, as this caused some confusion. The toy example had been handpicked to illustrate the differences between methods concerning sharpness, since all methods guarantee calibration. The new example presents calibrated results while clearly illustrating the differences between methods. **Comparison to new methods.** We have compared our approach to that of Song et al. (2019), Kuleshov and Deshpande (2022), and Capone et al. (2022). The results are presented in the PDF and detailed in the response to reviewer XLKC. **Assumption 4.1.** Assumption 4.1 can be shown to hold for kernels whose Fourier transforms are monotonically increasing in the hyperparameters. The revised paper shows how this can be leveraged to construct kernels that satisfy Assumption 4.1. Pdf: /pdf/5745351d63e1e4125ad3994c5bc2ee077c25f919.pdf
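For readers unfamiliar with the distinction drawn in the rebuttal above: the standard error shrinks with the number of runs while the standard deviation does not, which is why it better reflects uncertainty about the reported mean. A minimal sketch (the per-seed scores below are invented for illustration):

```python
import math
import statistics

def mean_pm_standard_error(samples):
    """Return (mean, standard error), with SE = sample std / sqrt(n)."""
    n = len(samples)
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(n)
    return m, se

runs = [0.72, 0.74, 0.71, 0.75, 0.73]  # hypothetical per-seed scores
m, se = mean_pm_standard_error(runs)
# One would then report m ± se in a results table.
```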
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Improving Self-supervised Molecular Representation Learning using Persistent Homology
Accept (poster)
Summary: In this manuscript, the authors have developed an interesting self-supervised learning model by incorporating persistent homology into a contrastive learning module. More specifically, a special topological-distance-based contrastive loss is proposed. The model is novel, and the results are very promising. However, I have some concerns about the persistent homology analysis part. Strengths: The authors have developed an interesting self-supervised learning model by incorporating persistent homology into a contrastive learning module. More specifically, a special topological-distance-based contrastive loss is proposed. The model is novel, and the results are very promising. Weaknesses: The PH model is not explained clearly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) As a powerful tool in topological data analysis, persistent homology (PH) has demonstrated great power in molecular data analysis. PH-based molecular descriptors and fingerprints have already been extensively tested on various benchmark datasets and have shown better performance than not only traditional molecular descriptors, such as ECFP, Morgan, and daylight, but also many deep learning models. Many related important references are not discussed in the paper, such as: Z. X. Cang, Lin Mu and Guo-Wei Wei, Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening, PLOS Computational Biology, 14(1), e100592 (2018). Z. X. Cang and Guo-Wei Wei, TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions, PLOS Computational Biology, 13(7), e1005690 (2017). Z. X. Cang and Guo-Wei Wei, Element specific persistent homology for the analysis and prediction of protein folding stability upon mutation, Bioinformatics, 33, 3549-3557 (2017). Duc Duy Nguyen, Zixuan Cang, Kedi Wu, Menglun Wang, Yin Cao and Guo-Wei Wei, Mathematical deep learning for pose and binding affinity prediction and ranking in D3R Grand Challenges, Journal of Computer Aided Molecular Design, 33, 71-82 (2019). Duc Nguyen, Zixuan Cang, and Guo-Wei Wei, A review of mathematical representations of biomolecular data, Physical Chemistry Chemical Physics, 22, 4343-4367 (2020). Xiang Liu, Xiangjun Wang, Jie Wu, and Kelin Xia, "Hypergraph based persistent cohomology (HPC) for molecular representations in drug design." Briefings In Bioinformatics, 22 (5), bbaa411 (2021). Xiang Liu, Huitao Feng, Jie Wu, and Kelin Xia, "Dowker complex based machine learning (DCML) models for protein-ligand binding affinity prediction." PLOS Computational Biology, 18(4), e1009943 (2022). Chi Seng Pun, Si Xian Lee, and Kelin Xia, "Persistent-homology-based machine learning: a survey and a comparative study." Artificial Intelligence Review, (2022). D. Vijay Anand, Qiang Xu, Junjie Wee, Kelin Xia, and Tze Chien Sum, "Topological feature engineering for machine learning based halide perovskite materials design", npj Computational Materials, 8 (203) (2022). 2) Mathematically, the filtration process in PH will only generate a nested sequence of simplicial complexes! In Figure 1, the plotted “chemical subgraphs” (during the filtration process) are not general simplicial complexes (Vietoris-Rips or Alpha complexes), as there are double bonds. Note that a double bond is illustrated as two edges that appear simultaneously between two vertices. This representation is not mathematically rigorous! The authors could use the common graph but add edge features to denote the double bond!
Note that it is possible to use a cellular complex to represent “double bonds”, but its persistent homology will be different! 3) Page 3, line 100: their filtration ends with the original graph. In this way, their Betti_1 bars will never die, as there are NO 2-simplexes in their model? The authors are suggested to double-check this setting. More discussion will be given below. 4) Page 3, line 106: the authors mention that they use $H_k (G_i)$? They should specify the range of the integer $k$. Further, in their topological loss, do they consider both Betti_0 and Betti_1, or just Betti_0/Betti_1? 5) Page 5, lines 209-212: “The fingerprints are usually constructed in a way so that they capture information about the molecular graph structure and sometimes additional domain knowledge; even if they do not capture the entire complexity of the molecules, they represent some, probably important aspects.” Even though PH models are important, they can only characterize homological information, such as individual components, circles, voids, and cavities. In biomolecules, Betti_0 is usually related to covalent bonds, while Betti_1 is related to pentagons (sugar rings) and hexagons (benzene rings). These findings are already widely known, and many references can be found: Kelin Xia, Xin Feng, Yiying Tong and Guo-Wei Wei, "Persistent homology for the quantitative prediction of fullerene stability." Journal of Computational Chemistry, 36, 408-422 (2015). Kelin Xia and Guo-Wei Wei, "Persistent homology analysis of protein structure, flexibility and folding." International Journal for Numerical Methods in Biomedical Engineering, 30(8), 814-844 (2014). Zhenyu Meng, D Vijay Anand, Yunpeng Lu, Jie Wu, and Kelin Xia, "Weighted persistent homology for biomolecular data analysis." Scientific Report, 10 (1), 1-15 (2020). The authors are suggested to add an example or some more discussion to explain what information is captured in their PH models. Further, the general filtration is usually based on atomic distance or some specially weighted distances; in this way, higher-dimensional simplicial complexes are generated! And Betti_0, Betti_1, and Betti_2 all have clear biological meanings! If the filtration process ends with the original graph (as stated on page 3, line 100), there is no Betti_2 information, and the Betti_1 bars will never die! The authors are suggested to compare with these existing approaches to show their advantages, or to add some more discussion about the differences. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Missing many important related references. The advantage of their filtration process is not clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
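The reviewer's point that Betti_1 bars can never die when the filtration stops at the (1-dimensional) graph can be made concrete with a small sketch: an edge joining two already-connected vertices gives birth to an H1 class, and with no 2-simplices available nothing can ever kill it. This is our illustration of the general fact, not the paper's construction; the edge ordering stands in for a filtration function.

```python
def h1_bars_from_edge_filtration(num_vertices, edges_in_order):
    """Birth times of H1 classes for a graph (top dimension 1) filtration.

    An edge that joins two vertices already in the same component creates
    an independent cycle, i.e. a new H1 class. With no 2-simplices in the
    complex, nothing can fill these cycles in, so every death is infinite.
    """
    parent = list(range(num_vertices))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []  # (birth_index, death); death is always infinite here
    for t, (u, v) in enumerate(edges_in_order):
        ru, rv = find(u), find(v)
        if ru == rv:
            bars.append((t, float("inf")))
        else:
            parent[ru] = rv
    return bars

# Benzene-like hexagon: the closing edge creates one never-dying H1 bar.
hexagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
bars = h1_bars_from_edge_filtration(6, hexagon)  # [(5, inf)]
```

An extended persistence module (as the authors state they used) is precisely what assigns finite deaths to such essential classes.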
Rebuttal 1: Rebuttal: **Thank you for carefully checking our theory part!** We are sorry for the unnecessary confusion caused by the missing details. We hope that the additional explanations resolve the points mentioned, esp. also Q1 and Q5, which are part of the global reply. In particular, please note that we did not intend to propose a particular filtration function, but rather to give an example. We consider this better left to the expertise of PH experts, and dependent on the actual application. **Clarification on Theoretical Background** - Q2: The “double bonds” in Figure 1 serve as a graphical illustration of the edge features rather than as part of the actual filtration. In our experiments, the presence of a double bond is also treated as an edge feature, which is clearly stated in Appendix A. We realize that this is confusing in the context where the filtration is introduced. - Q3: We intended to introduce a simplified scenario and give the more complex details about the version we actually used (an extended persistence module [1]) in the appendix, but later forgot to add a comment and the latter details. The extended persistence module [1] captures Betti_1 features, ensuring that they will also be killed in the end. - Q4: k is either 0 or 1. In TDL we use both in concatenation. We have updated all of this accordingly. **Q5 Topological Features in Embeddings** In regular molecule representation learning, it is indeed possible to draw rather direct conclusions about what is or can be captured in the embeddings if we use very straightforward filtration methods (e.g., no multi-parameter filtrations, which have been shown to be very powerful recently). However, only the TAE baseline we consider is actually optimized for learning PH-based embeddings. TDL has a more abstract goal and uses the information captured by the topological fingerprints only indirectly, for regularization.
We do not expect the learnt embeddings to explicitly capture topological features and therefore did not include such examples into the paper, since this might be misleading. We will add more discussion about the topic more generally (e.g., for TAE, and also topological views for CL could capture such features) to underline the potential PH may offer in SSL. This also fits well with the proposal of Reviewer XtrW to mention alternative, potentially useful architectures. --------------------------------------------------------- [1] Cohen-Steiner et al. "Extending persistence using Poincaré and Lefschetz duality." Foundations of Computational Mathematics 9.1 (2009): 79-103. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I have no further comments. --- Rebuttal 2: Title: Thank you so much! Comment: This final confirmation is very helpful and highly appreciated.
Summary: The paper uses self-supervised learning tools for graph representation learning by leveraging topological data analysis (TDA) methods. In particular, for molecular representation learning, the authors use persistent homology outputs to improve the embeddings obtained by GNNs. They evaluate their model on the molecular property prediction problem and consistently obtain performance improvements. Strengths: GNNs and TDA are both very successful and completely different methods in graph representation learning. In the past years, there have been several approaches to integrate these two methods effectively. With this aim, the paper proposes a new way to use TDA output to improve node embeddings in GNNs by using contrastive learning ideas. The idea is novel and has a lot of room for improvement. Molecular representation learning is a significant application area for graph representation learning. The authors applied their model in this domain, in particular, molecular property prediction. They obtain strong results on this important question. The paper's experimental part and ML details are strong. The authors made an in-depth analysis of the model from various angles. Weaknesses: The experimental results (Table 4) do not show significant improvements in several cases. The results only report the performance of internal models. It would be nice to see a comparison with the SOTA results on these datasets. The PH construction seems weak, as it does not use clique complexes and only uses nodes and edges in the filtration, i.e., the top dimension is set to 1. This filtration is not commonly used for graphs, as it reduces PH to node and edge counting via a simple Euler characteristic argument. However, fortunately, this does not affect their performance in this setting, since molecular graphs are planar and do not have loops of length 3, as all loops have length $\geq 5$. The authors should add a note for nonexperts that for molecular graphs, this trivial filtration setting is equivalent to the traditional clique complex setting for sublevel filtration because of the special structure of molecular graphs (no cycles of length 3). For TDL, PI is a good choice, but for TAE it looks odd; there are better stable PH vectorizations that could be used in such a loss function, e.g., silhouettes or landscapes. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Instead of using the PH output in the loss function to improve GNN embeddings, did you consider directly combining them, e.g., by simply concatenating PIs with GNN embeddings? I know this is a completely different approach, but it is a more direct method to combine both outputs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: There are various choices to be made in several places for the model. On one side, this gives flexibility to adapt the model to different settings, but on the other hand, it requires expertise in several domains for fine-tuning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
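The Euler-characteristic argument invoked in this review can be sketched directly: for a graph viewed as a 1-dimensional complex (no clique cells), Betti_0 is the number of connected components and V - E = Betti_0 - Betti_1. The code and the example graph below are ours, for illustration only.

```python
def betti_numbers_dim1(num_vertices, edges):
    """Betti_0 and Betti_1 of a graph as a 1-dim simplicial complex.

    With the top dimension capped at 1 (no clique/triangle cells), PH
    reduces to counting: Betti_0 = #components, and the Euler
    characteristic V - E = Betti_0 - Betti_1 gives
    Betti_1 = E - V + Betti_0.
    """
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    b0 = components
    b1 = len(edges) - num_vertices + b0
    return b0, b1

# Naphthalene-like graph: two fused hexagons, 10 atoms, 11 bonds.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
         (5, 6), (6, 7), (7, 8), (8, 9), (9, 0)]
b0, b1 = betti_numbers_dim1(10, edges)  # one component, two rings
```

The reviewer's caveat is that for graphs with triangles this would differ from the clique-complex Betti numbers; molecular graphs avoid the issue since their smallest cycles have length 5.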
Rebuttal 1: Rebuttal: **Thank you for the insightful comments!** W1 and W2 are addressed in the global reply. We hope that especially our explanation there and the additional results clarify our proposal. The suggestion of comparing to SOTA also in our extra experiments (W2) was very helpful and underlines our contribution. Please let us know in case any questions are left! **W3 Clarification on PH Construction** This is a good point, and we added it to the paper. We admittedly dropped many details since we were afraid that too much theory on persistent homology might prevent readers and ML researchers from seeing and getting convinced of its most important features. In fact, to the best of our knowledge, the theory has only recently been considered in the field; e.g., in the context of graph representation learning, graph homology has been studied in more detail in [1]. We did not intend to propose TAE as an architecture, but rather as a means to compare to and to obtain an estimate of the usefulness of PIs in an SSL setting, since we apply them for TDL (see also the response to Reviewer 9rP5, W1). We will mention this more clearly. **Q1 What about Concatenating PIs with GNN embeddings?** This is a valid architecture proposal, and we also think that using PH for constructing views deserves further study. Since we consider the distance-based approach we propose with TDL to offer unique advantages for existing models (i.e., by incorporating regularization in terms of the relations between the input graphs, which the models do not explicitly consider), given the possibility to improve a number of those by applying our loss on top, and given its potential for follow-up research (see also the response to Reviewer bcsi), we chose to focus on this one in this very initial paper. **Limitation: Variety of Design Choices** While this can be considered a limitation, we think that it rather shows the potential of the research direction we propose.
Further research is definitely needed before the possible architectures can be reliably used in practice. However, the many works on PH in chemistry (see also the references suggested by Reviewer recC) show that the domain is already convinced of the usefulness of topological fingerprints, and we therefore believe that this kind of knowledge should be considered in SSL, assuming that the latter becomes more important with the advancement of foundation models. We have now added some text about this to the paper to clarify the intention of our work, to motivate further study, and to avoid confusion as it appeared in other reviews (e.g., that we recommend using a specific filtration function, which we do not). ------------------------------------------------------------ [1] Rieck, Bastian. "On the Expressivity of Persistent Homology in Graph Learning." arXiv preprint arXiv:2302.09826 (2023). --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed answers and additional experiments. I have no further questions. Good luck with your submission. --- Rebuttal 2: Title: Thank you for getting back to us! Comment: We highly appreciate the careful and positive evaluation and also the encouraging response!
Summary: This paper proposes two molecular self-supervised learning methods, consisting of a fingerprint autoencoder and topological distance contrastive learning. The insight behind this paper is to utilize topological fingerprints as supervision in self-supervised learning. Thus, the authors reconstruct the topological fingerprint of a given molecule with an autoencoder and filter out similar molecules from the negative views in contrastive learning based on their similarity in the topological distance space. The experimental results show that their method improves previous baselines in various downstream tasks. Strengths: - The paper is well written and easy to understand. - The experimental results are comprehensive; the authors consider several setups such as linear probing and fine-tuning. Weaknesses: - Lack of novelty: Excluding similar molecules from the negative sample set is already considered in [1]. Conceptually, the difference between TDL and [1] is that TDL utilizes PH and [1] utilizes the ECFP fingerprint (I know that the loss of [1] is based on augmented molecules, but I think this does not make a big difference). This limits the novelty of this paper. - Table 1 does not support the effectiveness of the proposed method: Correlating the distance in embedding space with the distance between corresponding PIs is not the main purpose of molecular representation learning. If PIs are indeed very important, then why should we use the learned representation of the proposed method? Can't we just utilize PIs as the molecular representation? In other words, Table 1 and Table 17 seem to contradict each other. - Insufficient rationalization of the usage of PIs: In the molecular domain, the ECFP fingerprint is a widely applied molecular representation since it reflects substructure-wise molecular information. Why should we use PIs in molecular representation learning? - Flexibility of TDL: The authors insist that TDL can be flexibly and efficiently applied with any graph contrastive learning framework. However, any two existing methods can be composed with each other to improve performance. For example, ContextPred + GraphCL is possible, and the flexibility is not a unique feature of TDL. - Table 4 seems weak: TDL (or TAE) combined with existing methods does improve the overall performance. However, Mole-BERT and SEGA show better performance than the proposed method. ----Sorry for the confusion. I added the reference. [1] Improving Molecular Contrastive Learning via Faulty Negative Mitigation and Decomposed Fragment Contrast, Wang et al., JCIM 2022 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In Table 4, why are some baseline methods combined with TAE while others are combined with TDL? Can't TAE and TDL be combined jointly? - How is the performance of TAE (or TDL) jointly trained with SEGA? Does this improve SEGA itself or SEGA + other methods (e.g., SEGA + ContextPred)? - I would be convinced by the experimental results if the authors compared "TAE + TDL" vs. other methods (please refer to "Flexibility of TDL" in Weaknesses). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: Yes. The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
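The mechanism under debate in this review — dropping, rather than down-weighting, candidate negatives that are close to the anchor in persistence-image distance — can be sketched as follows. This is our hedged reading of the filtering idea with invented names, similarity scores, and threshold; it is not the authors' implementation.

```python
import math

def filtered_info_nce(sim_pos, sim_negs, pi_dists, margin, temperature=0.1):
    """InfoNCE-style loss where candidate negatives whose persistence-image
    distance to the anchor is below `margin` are excluded entirely.

    sim_pos: similarity to the positive example.
    sim_negs / pi_dists: per-negative embedding similarity and PI distance.
    """
    kept = [s for s, d in zip(sim_negs, pi_dists) if d >= margin]
    logits = [sim_pos / temperature] + [s / temperature for s in kept]
    log_denom = math.log(sum(math.exp(z) for z in logits))
    return -(sim_pos / temperature) + log_denom

# Filtering out a topologically similar "negative" (PI distance 0.0)
# removes it from the denominator and lowers the loss.
loss_all = filtered_info_nce(0.9, [0.8, 0.2], pi_dists=[0.0, 1.0], margin=-1.0)
loss_filt = filtered_info_nce(0.9, [0.8, 0.2], pi_dists=[0.0, 1.0], margin=0.5)
```

Under this reading, the contrast with [1] is that [1] weights negatives by fingerprint similarity, while filtering makes a hard keep/drop decision.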
Rebuttal 1: Rebuttal: **Thank you for pointing out this indeed very related paper!** We should not have missed such closely related work, and we hope that the delineation below resolves some of the related issues pointed out in your review. In fact, the detailed comparison highlights the novelty of our work. W3 and Q2 are addressed in the global reply. **W1 Clarification on Novelty: Comparison to [1]** - **Our paper's focus.** We study the benefits of PH for molecular SSL and propose technology that is suitable for incorporating PH into SSL, rather than just a loss that excludes "false" negative examples. - **We do not consider views.** The fact that we focus on input examples represents a considerable difference. While the similarity between rather similar molecules, such as views, might be captured by ECFP, the latter turned out to be not as effective as PIs in TDL (Tab. 15). We hypothesize that this is due to the fact that input examples that are overall different but have structural similarities might have too similar ECFPs for a fine-grained distance regularization. - **We filter the graphs based on the distances.** We experimented with weighting as used in [1] in the beginning, but our method showed better performance in our setting. The weighting might work with ECFP since these fingerprints are discrete and hence rather coarse-grained, while it may give too confusing signals to the model with PIs. - **Pretraining Data.** [1] used ~10M molecules. We reran their model for a fair comparison. - **Results.** With TDL, we obtain similar improvement in fine-tuning and linear probing as with the other models we considered.
| | Tox21 | ToxCast | Sider | MUV | ClinTox | HIV | BBBP | Bace | Average |
| --------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ------- |
| [1] | 75.1 (0.7) | 63.5 (0.4) | 59.4 (1.0) | 74.7 (1.9) | 81.0 (2.6) | 77.3 (1.2) | 69.6 (1.2) | 77.3 (1.0) | 72.24 |
| [1] + TDL | 75.9 (0.6) | 63.7 (0.3) | 60.7 (0.8) | 75.1 (1.3) | 83.8 (1.9) | 76.7 (0.7) | 71.2 (0.9) | 78.5 (1.3) | 73.20 |
| [1] | 68.8 (0.4) | 60.4 (0.3) | 57.5 (1.1) | 59.3 (2.2) | 73.3 (2.1) | 67.5 (0.7) | 63.8 (0.8) | 7.9 (1.2) | 65.43 |
| [1] + TDL | 69.8 (0.3) | 61.1 (0.4) | 59.0 (0.4) | 61.7 (1.7) | 72.8 (1.0) | 69.6 (0.8) | 64.4 (0.4) | 74.7 (0.9) | 66.64 |

**W2 & W5 On the Demonstration of Effectiveness** Assuming the filtration functions are carefully chosen and possibly include external knowledge, PIs will likely offer useful features for molecular representation learning (see also the references suggested by Reviewer recC). Nevertheless, our paper does not intend to compete with the existing body of graph SSL research and outperform specific SOTA approaches, but rather *improve* it by exploiting the unique features of PH in a complementary way. This is also why our initial work on the topic focuses on TDL rather than a custom, standalone CL-based loss, where PH could be used for constructing views, which would likely be a better model than the simple TAE. *Table 1 shows that TDL is effective in slightly moving the molecule embeddings in the embedding space towards the structure of the PI space*. Learning both embeddings that are rich in information, to fit multiple possible downstream scenarios, and a well-structured embedding space are especially important in SSL (which has more specific requirements than regular molecular representation learning). Re SOTA, please note our discussion of fine-tuning experiments in SSL in the global reply, our results with stronger filtrations, and the improvement we obtain for AD-GCL.
**W4 "Flexibility" of TDL**

It is true that other approaches can be combined as well. What we mean is that TDL covers a dimension that is not addressed by the regular models. Hence, "flexibly" is intended to convey the fact that it clearly adds a novel form of regularization which is likely effective. We did not want to cause confusion with that wording, which is indeed not crystal clear; we are definitely open to alternative suggestions.

**Q1 & Q3 TAE + TDL?**

Since TDL's objective is a direct consequence of TAE's objective, we do not expect particular benefits from this combination. In the paper, we only ran TAE in combination with ContextPred to get an idea of its performance in such combinations, but this is intended more as a side experiment. In combination with other CL-based approaches, TAE's objective would be rather strict in that it enforces direct embedding similarity, which might contradict the other model's objectives. TDL is more general (and in a certain sense flexible) in that it regularizes adaptively. As noted above (W2 & W5), standalone TDL is not designed for SOTA comparison since it is clearly missing CL representation power by not considering views. We hope that our explanations provide clarification in this regard and that our additional experimental results help to justify our contribution. In case there is doubt left, please let us know. For analysis purposes, we ran the experiments, and the results largely match our expectations. Yet, standalone TDL is surprisingly effective.
| | Tox21 | ToxCast | Sider | ClinTox | MUV | HIV | BBBP | Bace | Average |
| --------------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | ------- |
| TDL | 75.8 (0.5) | 62.1 (0.5) | 62.2 (0.9) | 79.1 (3.8) | 75.2 (2.3) | 76.9 (0.9) | 66.5 (1.8) | 78.4 (1.1) | 72.02 |
| TAE+TDL | 76.2 (0.3) | 62.9 (0.4) | 60.6 (1.0) | 81.7 (1.8) | 74.2 (1.5) | 76.2 (1.1) | 67.4 (0.8) | 83.0 (1.5) | 72.78 |
| TAE+GraphCL+TDL | 76.0 (0.3) | 63.7 (0.4) | 62.6 (0.6) | 82.8 (2.3) | 75.4 (2.3) | 77.4 (0.6) | 69.8 (0.6) | 81.8 (0.9) | 73.69 |

---

Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: First of all, thank you for providing the discussion about the points I mentioned.

---

**[W1] Novelty (comparison to [1])**

- Our paper's focus: I do not agree with this claim. Even though the authors did not intend the same effect as [1], the loss function is almost the same. I think this significantly limits the novelty of this work.
- We do not consider views & We filter the graphs based on the distances: As I mentioned in the original review, I'm aware of these slight differences. However, at least for me, this does not make much difference. As far as I understand, [1] repels the representations of distant molecules in terms of ECFP, while this work repels the representations of distant molecules in terms of PI. If this is not true, please correct me.

**[W2 & W5] & [W4] "Flexibility" of TDL**

- "flexibly" is intended to convey the fact that it clearly adds a novel form of regularization which is likely effective: I do not agree with this claim. Similar to the contrastive learning objectives in conventional molecular representation learning (e.g., GraphCL, JOAO), TDL can also be applied as a standalone objective (there is no specific reason for TDL to be used only as a regularization).
If the authors want to claim that TDL is effective, the tables should have been designed as GraphCL vs. JOAO vs. TDL vs. other molecular pretraining methods. I think my concerns have not been resolved, and I would like to keep my score. Please let me know if I have misunderstood anything. Thank you.

---

Rebuttal 2: Title: Thank you for getting back to us! Comment: We indeed believe that there is some misunderstanding and will try to clarify.

**W1 Novelty in Comparison to [1]**

- We focus on **a rather different loss function**\
Please see the denominator: we apply a "filter" rather than weights. Subtle changes can have a huge impact in ML, and entire papers have been written about this kind of seemingly small adaptation. In the appendix, ablation experiments show that *this is the appropriate method for topological distances* (vs. ECFP, as used in [1]), which is the area we want to study in the context of SSL.
- **The nature of our work is different**\
Note that [1] appeared in a chemistry journal and **the analysis in [1] focuses on aspects such as explainability which are most interesting for chemists, while our paper has a clear technical, ML focus** and investigates aspects which are relevant in this field:
  - we introduce *persistent homology, a mathematical method with well-known theory*, to molecular SSL
  - we consider *various, popular baselines to prove the generality* of our work
  - we provide *extensive linear probing* experiments
  - we show considerable improvements in the latter and in small-data settings for all baselines which, to the best of our knowledge, have *neither been considered nor obtained similarly in any related work on molecular SSL*
- Citing the **reviewer guidelines**:
> Originality: Are the tasks or methods new? Is the work a *novel combination of well-known techniques? (This can be valuable!)* Is it clear how this work differs from previous contributions?
> Is related work adequately cited?

We acknowledge that the latter two points were missing in the submission, but adding the citation and three sentences describing it will neither change the nature of our work nor its original contributions, which were clearly recognized by other reviewers.

---------------------------------------------------------

**W4 "flexibly"**

To resolve the concern that we are misrepresenting our method, we can certainly remove that word.

---------------------------------------------------------

**W2 & W5 Standalone TDL**

Like any other loss function, TDL can be applied as a standalone objective. Yet, as the paper's title points out, our submission's instantiation (without views) and evaluation **focus on investigating how PH can *improve* existing methods based on its complementary nature**. The suggested study using views is interesting follow-up work, which should definitely also consider the various other potential benefits of PH.

---------------------------------------------------------

Lastly, we note that "Reject" means "a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations." We do not understand this rating at all and hope that the above comparison helps clarify our work's contribution. **Thank you very much for getting back to us with the remaining concerns and for being open to further discussion!**

---

Rebuttal Comment 2.1: Comment: Thank you for the detailed response. I have a further question. Even if I accept that this method is intended to **focus on investigating how PH can improve existing methods based on its complementary nature**, I think that the effectiveness of this method should then be verified by composing it with other methods. Every molecular representation learning method has its own perspective on improving performance.
For example, GraphCL can be viewed as a method that **focuses on how discriminating similar and different views can improve existing methods based on its complementary nature**, upon ContextPred. Therefore, from my point of view, Tables 1, 2, 3, and 4 do not provide meaningful insight about **how PH can improve existing methods**. In other words, there are no baselines compared in those tables, and the comparison should be "ContextPred + TDL" vs. "ContextPred + GraphCL" vs. compositions of other methods. Therefore, I think my concerns are not fully resolved, and I would like to keep my score. Thank you.

---

Rebuttal 3: Title: Author Response Comment: Thank you for providing more details; we'll try to clarify. We decided to focus on the distances between **samples**, since **this dimension of the problem is completely neglected by existing CL methods focusing on views**. We believe this aspect bears the most novelty. The results in Tables 1, 2, 3, and 4 target this topic. We agree that it is likely also promising to apply PH to create views, and we are actually focusing on this topic in our follow-up research. Reviewer XtrW mentioned this direction as well, as a "completely different approach" that deserves further research. There are many more potentially interesting topics which could exploit the complementary nature of PH, so we had to choose one to start with. It is very unfortunate if our writing caused confusion.

- If "complementary nature" is misleading in your opinion, we are definitely open to changing the wording.
- If GraphCL vs. GraphCL + TDL does not show that TDL "improves" GraphCL, we can change "improve" to an alternative notion (maybe "complements") which better describes our method's goals.

We hope that the confusion, which seemingly came from two words, does not impact the recognition of our actual contributions.
------------------------------------------------------

In order to address your concerns, we ran "ContextPred + GraphCL" to provide some more comparison; we had tried "ContextPred + TDL" before. However, combining such very different kinds of models turned out to be challenging, and the results would need further, careful tuning of both models to provide real insight. This is probably also why these kinds of model combinations are usually not considered in the literature.

---

Rebuttal Comment 3.1: Comment: Thank you for the detailed response to alleviate my concern. Indeed, in contrast to the authors' claim, several CL methods focusing on views have studied the distances between samples. For example, [1] utilizes "hard negative" samples to improve the performance of molecular representation learning. This paper and [1] introduce opposite objectives: this paper discriminates "distant" molecules while [1] discriminates "nearby" molecules. If these two approaches were carefully analyzed in the manuscript, I might have agreed on the novelty that the authors argued. However, in the current manuscript, the claimed novelty seems not well supported.

[1] Molecular Contrastive Learning with Chemical Element Knowledge Graph, AAAI 2022

---

Rebuttal 4: Title: Author Response Comment: This is a fair and very critical point. Our above statement was too absolute. We are sorry; this happened in the heat of the moment and was truly without intention.

**Related Work.** In fact, [1] mentioned in your initial review already considers samples to some extent. In our updated related work section, next to the works from the reviews, we have also incorporated some other works using hard negatives and correlation (i.e., in SSL more generally). Note that the ToDD paper, to which we provide a detailed comparison in our submission, uses hard negatives as well.
**Our approach is different.** The ToDD paper and some others use hard negatives in the supervised setting and hence have label information available to, in a sense, safely select the samples. In the paper mentioned and in the other SSL works we found, hard samples are used to build the batch and hence to shape the numerator in the CL loss (i.e., the distances between views). In contrast, we have pairs of samples in the denominator and explicitly model the distance between those. We can definitely add more discussion to support our novelty. In our preliminary experiments, we weighted samples in the numerator of the regular, view-based CL loss, similar to [1]. But TDL showed better performance. Furthermore, we hypothesize that, in our setting, pushing away similar samples might be conceptually critical since they may still have similar properties. Therefore we focus on pushing away truly negative samples. The closest related work in this sense is probably [1], and we have provided a detailed written and experimental comparison to it in the rebuttal.

**Our paper demonstrates novelty** in showing that TDL suits PH in that it can complement a wide range of CL approaches and improve performance in various interesting settings. Selecting hard negatives based on PH would certainly be another possible application of PH in SSL, similar to the views based on PH discussed previously. We will add this to the more general discussion about the potential of PH for molecular SSL.

[1] Improving Molecular Contrastive Learning via Faulty Negative Mitigation and Decomposed Fragment Contrast, Wang et al., JCIM 2022
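To make the filter-vs-weights distinction discussed in this thread concrete, here is a minimal NumPy sketch of a denominator-filtered repulsion term. The function name, the threshold `delta`, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation; the positive (view) term of the base CL objective is omitted, since TDL is described as being added on top of an existing CL loss.

```python
import numpy as np

def filtered_contrastive_loss(z, d_topo, delta=1.0, tau=0.5):
    """Repel pairs whose topological (PI) distance exceeds `delta`.

    Unlike a weighting scheme, pairs below the threshold are *excluded*
    from the denominator entirely (they may be false negatives).
    z      : (n, d) array of L2-normalised embeddings
    d_topo : (n, n) symmetric array of pairwise PI distances
    """
    n = len(z)
    sim = z @ z.T / tau                               # scaled cosine similarities
    keep = (d_topo > delta) & ~np.eye(n, dtype=bool)  # filter, not weights
    neg = np.where(keep, np.exp(sim), 0.0)            # retained negatives only
    # averaged log-sum-exp repulsion; minimising it pushes the retained
    # (topologically distant) pairs apart in embedding space
    return float(np.mean(np.log(neg.sum(axis=1) + 1e-12)))
```

A weighting variant in the spirit of [1] would instead multiply `np.exp(sim)` by a continuous function of `d_topo`; the sketch above keeps or drops each pair outright.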
Summary: The paper proposes two approaches to leverage topological information (obtained from persistent homology) for molecular representation learning in a self-supervised setting. The first (TAE) uses an encoder-decoder architecture whose decoder aims to recover topological fingerprints. The second approach (TDL) consists of a contrastive loss based on the similarity between topological fingerprints. The latter is combined with existing contrastive learning methods. Experiments on linear probing and downstream prediction tasks show the efficacy of the proposals.

Strengths:
- Ablation studies: There is a substantial number of experiments and ablation studies.
- I like the simplicity of the proposed approach.
- Flexibility: TDL can be combined with most SSL approaches.

Weaknesses:
- Overall, I believe the paper provides limited insight to support the proposals. Also, it does not discuss which structural information the proposed approach captures that existing methods do not. From a conceptual level, we know that 1-WL GNNs cannot capture information even from simple homology (e.g., the number of independent cycles of a graph). Thus, TAE has inherent limits/failures. In other words, the topological information we lose after pushing a graph through a GNN (which would be captured by TDA) cannot be recovered from GNN embeddings.
- Results on downstream tasks: Based on Table 4, the gains from TDL look marginal. The gain is less than one standard deviation from the base model for many datasets.
- Incorporation of domain knowledge: The claim that the proposal allows for incorporating domain knowledge seems overstated. The basis for such a claim comes from the choice of the filtration function. However, it is unclear how different filtration functions affect the topological embeddings --- thus, domain experts cannot leverage their knowledge to choose the filtration functions.
- TAE vs. TDL... which one should we use?
The paper says that "TAE, which we developed for comparison purposes only..." (line 283). I am unsure whether TAE should be introduced as a main contribution or as a baseline (in the experiments) for assessing the feasibility of learning the topological fingerprints with a simple architecture.

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. Could the authors elaborate on why the fact that 'the Euclidean distance between PIs is stable with respect to the 1-Wasserstein distance between PDs' (line 111-112) is relevant here? Don't we want stability wrt to the input graphs?
2. What does the paper mean by calibrated distances?
3. Can the proposed methods be extended to employ learnable filtration functions?
4. A significant part of the experiments is devoted to showing the alignment between the learned molecular representations and the topological fingerprints. Isn't it naturally expected from the proposed design (e.g., additional loss term)?
5. Could the authors elaborate more on the fact that TAE can capture inter-molecule relationships if they learn the PIs? Can't GNN embeddings also learn important structural information and be stable?
6. Have the authors considered applying only TDL without other loss terms from CL methods? If yes, how well does it work?
7. The paragraph 'linear probing' (line 281) says 'we evaluated extensively using MLPs on the representations of the pre-trained graph encoders...'. Shouldn't this employ linear models instead of MLPs?
8. Duplicated text, e.g.,
   - Lines 1-5 --> 20-23
   - Lines 24-31 --> 199-206
   - Lines 5-6 --> 37-38
9. Typos:
   - GraphCL propose (line 32)
   - JOAO extend (line 33)
   - and to exploit (line 65)
   - TDL provides as a form of regularization (line 233)

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors mention limitations in the main paper (section 5).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the very detailed feedback!** We reformulated the particular benefits of PH for molecular SSL below and also adapted the paper. We hope that this clarifies the initial confusion about our contribution; please let us know in case further details are needed! W2 and W3 are addressed in the global reply. For W1 and W3, please also see the reply to recC, Q5.

**W1 & W4 Clarification on TAE**

Observe that TAE is encouraged to directly learn the PI via the objective function; hence it does not necessarily have to reproduce the algorithmic procedure. To verify its ability, we ran preliminary experiments applying the pretrained, not fine-tuned TAE for PI prediction on the evaluation data. The Pearson correlation coefficients between the predicted and real PIs show that it approximates the latter fairly well:

| | Tox21 | ToxCast | Sider | ClinTox | MUV | HIV | BBBP | Bace |
| ---- | ------ | ------- | ------ | ------- | ------ | ------ | ------ | ------ |
| TAE | 0.8572 | 0.7744 | 0.5939 | 0.8642 | 0.9044 | 0.7359 | 0.8660 | 0.8514 |

Moreover, TAE is intended as a simple and straightforward baseline for PIs in SSL, in the way the area uses the baselines of Hu et al. as context for interpreting results. In particular, since TDL does not directly encourage the model to learn PIs but rather the relations between them, the comparison to TAE allows us to evaluate the effectiveness of this rather abstract goal. Nevertheless, note that our comparison is only coarse since it is based on the numbers (i.e., instead of on the actual predictions). A similar, even closer comparison could be drawn by considering regular CL using the PIs as embeddings, as outlined by Reviewer XtrW.

**Q1 & Q5 Relevance of Stability of Topological Fingerprints, and Relation to TAE**

Stability w.r.t. the input graphs is definitely the goal, but we need an appropriate metric for the data space.
There is no unique such metric for graphs, and we are only aware of few such works (e.g., a recent paper proposes a custom tree mover's distance [1]). Our paper's hypotheses are that:

- the stability certain topological fingerprints offer is a mathematically grounded, well-studied, and efficient proxy for stability w.r.t. graphs, which we can use complementarily to existing approaches;
- stable representations support the learning of a well-structured embedding space, which is particularly important for SSL;
- the fact that they are generic in the filtration function gives domain experts the opportunity to flexibly inject their knowledge, which particularly suits molecular representation learning (see also the references mentioned by Reviewer recC).

Altogether, this motivates us to study persistent homology in the context of SSL over molecular graphs. Assuming TAE learns PI-based representations sufficiently well, the distances ("relations") between its embeddings reflect the ones between the corresponding input graphs in terms of the 1-Wasserstein distance between their persistence diagrams. As, for example, [1] shows, stability can indeed be defined for GNN embeddings. Our paper is intended to complement this research by investigating how we can leverage the body of existing work in persistent homology in the context of graph representation learning and, in particular, in molecular SSL.

**Q2 What does the paper mean by calibrated distances?**

TDL intends to "adjust" the distances between the embeddings such that they better reflect the ones between the PIs. In this sense, they get calibrated. We are aware that the notion is not perfect but did not find a better one. We are definitely open to suggestions! For now, we added this explanation to the paper.

**Q3 Can Filtration Functions be Learnt?**

While this should be possible in general, it might be expensive in an SSL setting since the topological fingerprints then have to be reconstructed in each epoch.
Furthermore, in the context of our TAE and TDL, the objective functions are not clear.

**Q4 Why Experiments showing Architecture Alignment?**

While we indeed designed our architecture as carefully as possible, we do not think that such design intentions or even theoretical architecture guarantees necessarily translate into practice (e.g., GIN has been shown to be very expressive, but there are datasets where the conventional GCN is superior [2]). Therefore, we consider these experiments showing alignment of the architecture a considerable contribution, in particular because the relation to the topological fingerprints offers various topics for future investigation (see also the response to Reviewer bcsi).

**Q6 What about TDL w/o other Losses?**

Given that TDL does not incorporate the regular, proven views used in CL, we do not recommend this setting. *We have now run some such experiments, interestingly obtaining quite good performance* (see U3kd, Q1 & Q3 TAE + TDL?). Note that regular CL views could certainly be constructed based on PIs or other topological fingerprints. In our initial study, we chose a different, more novel focus, but we are investigating alternative options now.

**Q7-Q9 Minor Comments**

Thank you for checking on this level of detail! The "MLP" in the context of linear probing is indeed a mistake; we used a simple linear layer. We also fixed the remaining items.

----------------------------------------------------------------------------------------

[1] Chuang, et al. "Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks." Advances in Neural Information Processing Systems 35 (2022): 2944-2957.
[2] Dwivedi et al. "Benchmarking graph neural networks." JMLR (2023).

---

Rebuttal Comment 1.1: Comment: Thank you for taking the time to answer my questions and comments. While some of my concerns were cleared, I still believe the paper provides little insight to support the proposal.
In an effort to address my conceptual question, the authors provide a table showing correlations between PIs and the GNN approximations (varying from 0.59-0.90). This is insufficient and does not strengthen the motivation for the proposal. Overall, I think the contribution is neither theoretically grounded nor built upon solid claims. Also, I believe the claim that the proposal allows for incorporating domain knowledge seems overstated. In their reply, the authors have run additional experiments with a "stronger filtration function" with "more domain knowledge". What does stronger filtration mean here? Is it capable of capturing topological information that the previous one couldn't? What does a filtration function say to domain experts, and how can they choose the best filtration? I acknowledge that I have read the other reviews and authors' responses. Since some of my concerns were alleviated, I am increasing my score from 4 to 5.

---

Rebuttal 2: Title: Question by Authors Comment: Dear Reviewer 9rP5, since your initial review pointed out several detailed questions and concluded with a slightly negative overall rating, please let us know in case there are remaining concerns which we can address. Thank you again for providing that much feedback. Your recognition of the paper's strengths is very valuable!

---

Rebuttal 3: Title: Thank you! Comment: Thank you for acknowledging our rebuttal and for getting back to us! We try to clarify below.

**C1 Theoretical Grounding of Contributions and Claims**

As described in the paper, TAE is intended as a simple, straightforward baseline, without specific theoretical grounding. Topological fingerprints have shown promising results in several past works, and TAE is just trained to predict those. In this way it also likely loses aspects of the molecules which are not captured in the topological fingerprints. TDL is theoretically grounded in that our objective is based on the stability of the topological fingerprints.
Molecules which have more similar PDs are moved closer to each other in the embedding space. Since the training reduces the loss function, as stated in your review, our design models the theoretical contribution to some extent. Our empirical evaluation tries to complement that. Since the representations after pre-training capture the learnt knowledge most clearly, we have placed special focus on linear probing and small datasets, and we see strong improvements there. We think the distance probing experiments are strong since they show that the embeddings capture distances to a certain extent, which is an important capability (see also the rebuttal for bcsi). However, we also applied k-NN on the PIs (i.e., on the raw fingerprints, not on embeddings; not presented in the paper/rebuttal so far), to **verify that the relations between the PIs in fact capture some information and can be used for supervision, and hence that we build upon solid claims**. Maybe this is a more direct demonstration, in the way you had in mind. We see that they seem to capture nearly as much - and likely different - knowledge as ECFP.

| | Tox21 | ToxCast | Sider | ClinTox | MUV | HIV | BBBP | Bace |
| ---- | ----- | ------- | ----- | ------- | ---- | ---- | ---- | ---- |
| PI | 58.8 | 51.1 | 58.7 | 50.9 | 50.2 | 64.8 | 55.2 | 75.5 |
| ECFP | 63.8 | 54.6 | 59.1 | 50.7 | 54.0 | 68.1 | 59.3 | 77.0 |

**C2 Meaning of "Stronger" Filtration**

We indeed failed to give more details about that. Our initial filtration function only used atom symbols to construct regular PIs (to allow for a purely technical comparison with others), while this one:

1. Considers **various types of information**:
   - a weight filtration to express bond strength in the compounds: a single bond has weight 1, a double bond weight 2, a triple bond weight 3, and an aromatic bond weight 4 on the edges;
   - a sublevel filtration on partial atomic charges; and
   - a sublevel filtration on atomic mass.
2. For each of those three, we do not simply consider the given filtration, but **construct a 2D filtration by filtering**: in one dimension according to the above information and in the second dimension according to a VR filtration capturing the distances between atoms. (Fig. 3 in [1] illustrates this kind of multi-dimensional filtration.)
3. Lastly, the **three 2D PIs are concatenated**, and we compute distances based on the combination of PIs.

**C3 What does a filtration function say to domain experts, and how can they choose the best filtration?**

Filtration functions are techniques used in models, similar to how we use kernels in ML. The choice is based on the data and the application scenario. In SSL, we hypothesize that more generally applicable filtrations are likely more effective. But this will have to be validated in practice, of course. In fact, the filtration from [1] captures such basic knowledge and showed good performance both in their work and in our experiments, hence it provides a good starting point. Moreover, we believe the inclusion of 3D information might turn out to be helpful as well.

**Example.** The BBBP dataset focuses on the assessment of compounds' blood-brain barrier (BBB) penetration, and it has been observed that polarity-related descriptors tend to exhibit inverse correlations with BBB permeability [2]. Partial atomic charges are a measure of the degree of electronegativity of an atom in a molecule and can indicate the polarity of atomic interactions. Therefore, partial atomic charges could potentially serve as filtration functions for this dataset. The ToDD filtration includes them, and our results show that the partial charges here may indeed change the picture. Since there is other information involved, we cannot draw direct conclusions; yet the increase on BBBP is much higher than on all other datasets (see .pdf).
| | BBBP |
| --------------------- | ---------- |
| TAE | 67.5 (1.1) |
| TAE (ToDD filtration) | 70.4 (0.8) |

[1] ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery, NeurIPS 2022.
[2] Jiang, Dejun, et al. "Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models." *Journal of cheminformatics* 13.1 (2021): 1-23.

(edited the example 08/21)
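For intuition on edge-weight (sublevel) filtrations like the bond-strength filtration described above, the 0-dimensional persistence (component merge times) can be computed with a short union-find pass. This is a generic illustrative sketch, not the paper's implementation; the bond-weight encoding and the toy molecule in the test are assumptions for demonstration.

```python
def zero_dim_persistence(n_atoms, weighted_edges):
    """0-dim persistence of an edge-weight (sublevel) filtration.

    All atoms are born at filtration value 0; an edge (w, u, v) enters
    at its weight w (illustratively: 1 = single, 2 = double, 3 = triple,
    4 = aromatic bond). An edge that merges two connected components
    kills one of them, producing a (birth, death) bar; components that
    never merge persist forever.
    """
    parent = list(range(n_atoms))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bars = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                       # merge: one component dies at w
            parent[ru] = rv
            bars.append((0.0, float(w)))
        # an edge inside a component closes a 1-cycle, not a 0-dim bar
    survivors = {find(i) for i in range(n_atoms)}
    bars.extend((0.0, float("inf")) for _ in survivors)
    return sorted(bars, key=lambda b: b[1])
```

For example, a hypothetical 4-atom chain with bond weights (1, 2, 1) yields two bars dying at weight 1, one at weight 2, and one infinite bar; vectorizing such bars (e.g., as PIs) then gives the fingerprints that TDL compares.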
Rebuttal 1: Rebuttal: **We thank all reviewers for the very fair, detailed, and constructive feedback!** We address all comments below and are happy to provide additional information if needed.

---------------------------

**G1 Summary of Additional Experiments Suggested by Reviewers**

- **Technically similar approach** ([1] suggested by U3kd). Performance well below GraphCL+TDL when it is pre-trained over the same data (~2M molecules rather than the ~10M considered in [1]), and TDL yields improvement.
- **Standalone TDL** (9rP5 Q6). Though not recommended, it achieves a competitive average ROC-AUC of 72.02, *without using views*.
- **Recent AD-GCL** (U3kd Q2, XtrW W2). TDL increases average performance from 72.67 to 73.21. Our baselines+TDL often outperform AD-GCL in our additional experiments by large margins (e.g., linear probing).
- **w/ Stronger Filtration, ToDD [6]** (9rP5 W1, XtrW W3, recC Q5). Convincing improvements, *across several CL baselines*:
  * Linear probing: average performance raised to >= 67 (vs. <= 64 w/o TDL), which is comparable to fine-tuned GIN
  * Fine-tuning: improvement between 1.3 and 2.0 for all models
  * Low-data fine-tuning: performance improvements of up to 10%

**Please see the attached pdf.** We report some results and additional side experiments in the replies to the individual reviews.

---------------------------

**G2 Comparison to SOTA** (U3kd Q2, XtrW W2)

The SEGA paper appeared on arXiv on May 8 and the code is not yet available. We tried to run Mole-BERT for quite some time but did not succeed, and all issues on its repository have so far been deleted without being addressed. We have now run TDL on top of AD-GCL and see good improvement (see attachment). Moreover, the inclusion of AD-GCL into the other experiments nicely *highlights the complementary nature and potential of TDL: overall, it makes the baselines outperform AD-GCL, often by large margins*.
---------------------------

**G3 Power and Complementary Nature of PH-based Embeddings, compared to regular GNN embeddings or ECFP** (9rP5 W1&W3, U3kd W3, XtrW W3, recC Q1&Q5)

- **References.** In the submission we only mentioned the work on molecule representation learning using persistent homology that is closest to ours and omitted the majority of works in the area, since the technical focus of our paper is on SSL, a specific setting with its own challenges. However, as suggested by Reviewer recC and as shown in the reviews, PH is less known in the ML community, and *we can back our claim that it represents a proven and powerful method for representation learning over molecules according to domain scientists*. We will add more discussion about this to the appendix and hope that we thereby address 9rP5 W3 (partially), U3kd W3, and recC Q1.
- **Power of Filtration Function.** In the submission, we focused on a most simple filtration function based on atom symbols, which does not use more domain knowledge than related works. This allows us to show that TDL adds a novel dimension of knowledge (i.e., the explicit distance regularization) even in this scenario. We also introduced the most simple filtration in the theory part, to convey the overall idea. *We did not intend to recommend a particular filtration; TDL is generic in this regard*. To provide more support for our proposal of using PH (9rP5 W1, XtrW W3, recC Q5), *we report hopefully convincing results in the attachment using a stronger filtration function* including more domain knowledge, the one proposed in [6].

---------------------------

**G4 Relevance of Fine-Tuning Experiments** (9rP5 W1&W2, XtrW W1)

Please note that "the experimental results" should not be equated with Table 4 alone. We do not consider the fine-tuning experiments to show our method's effectiveness best. TDL shines in other scenarios.
Furthermore: - In drug discovery, downstream benchmarking results rarely translate into practice, which is increasingly being recognized (e.g., see the ICML 2023 panel on the topic [1] or [2]). Hence, the *generally* minor model differences over Moleculenet should be interpreted with care. - SOTA works in SSL in other domains sometimes do not consider fine-tuning experiments at all [3, 4], since the data contains various forms of additional confounding factors (e.g., label distribution, dataset balance) that make it hard to gain insight into the actual influence of the SSL embeddings. Generally, fine-tuning is regarded as one possible label-based evaluation protocol, often applied only after linear probing, etc. [5]. The Moleculenet benchmark used with scaffold split represents a challenging setting, which gives good estimates of model performance in certain downstream scenarios. However, we think that graph SSL should extend the experimental setting introduced by Hu et al. in 2020, which relies solely on fine-tuning, and evaluate more broadly. TDL *significantly* improves a range of well-known and more recent graph SSL approaches on Moleculenet - in linear probing and - in low-data experiments on subsets of the benchmark. Our additional results using a stronger filtration method even improve upon the numbers reported in the submission and also show that PH may lead to considerable increases in fine-tuning.
arXiv preprint arXiv:2304.12210 (2023). [6] ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery, NeurIPS 2022. Pdf: /pdf/cfe0b1654d7081e6f99a583453f577a799030999.pdf
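To make the PH machinery referenced in G3 more concrete for readers less familiar with it: 0-dimensional persistent homology of a molecular graph under a vertex-based filtration (e.g., numeric values assigned to atom symbols) can be computed with a simple union-find. The sketch below is generic and purely illustrative, not the paper's implementation; the function name and example values are hypothetical.

```python
import math

def h0_persistence(vertex_vals, edges):
    """0-dim persistence pairs of a graph under a lower-star filtration.

    A vertex is born at its filtration value; an edge appears at the
    max of its endpoint values.
    """
    n = len(vertex_vals)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    # process edges in order of appearance in the filtration
    for u, v in sorted(edges, key=lambda e: max(vertex_vals[e[0]], vertex_vals[e[1]])):
        t = max(vertex_vals[u], vertex_vals[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle; irrelevant for H0
        # elder rule: keep the root with the earlier birth;
        # the younger component dies at time t
        if vertex_vals[ru] > vertex_vals[rv]:
            ru, rv = rv, ru
        pairs.append((vertex_vals[rv], t))
        parent[rv] = ru
    # components that survive the whole filtration never die
    pairs += [(vertex_vals[i], math.inf) for i in range(n) if find(i) == i]
    return sorted(pairs)

# toy 4-vertex graph with birth values 0, 2, 1, 3
print(h0_persistence([0.0, 2.0, 1.0, 3.0], [(0, 1), (2, 3), (1, 2)]))
```

The resulting (birth, death) pairs form the 0-dimensional persistence diagram whose distances TDL would regularize; richer filtrations like the one in [6] only change how `vertex_vals` is computed.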
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper explores self-supervised learning in the context of molecular representation, specifically based on persistent homology. The paper proposes an autoencoder to demonstrate the general representational power of PH and a contrastive-learning-based loss that can be applied to existing SSL approaches. The proposed approach is evaluated for molecular property prediction, showing improved representations and predictive power compared to baselines across different tasks. The claim is that the new loss function enhances baseline performance, particularly with small datasets. Strengths: The paper is well written and the idea is novel and interesting. Weaknesses: - Given the technical nature of PH, and its origin in the domain of topological data analysis, a more mathematical foundation of the methods in the paper would be desired. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Since PH naturally offers multiple data views, I wonder whether the learned representations can be more explainable, that is, not just focusing on atom-based attribution but also having explanations of more global features/subgroups, for instance, along the lines of Bertolini et al. "Beyond Atoms and Bonds: Contextual Explainability via Molecular Graphical Depictions". Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors describe the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for the interesting comments!** The paper does not explicitly discuss the points mentioned in the review because these are challenging topics in themselves, but they nicely underline the future research potential and we are happy to discuss them further. **W1 More mathematical foundation would be desired.** We agree on that and definitely plan to investigate the learnt embeddings from a theoretical viewpoint, but we consider that out of the scope of this current, initial work on PH in the context of molecular SSL. It is not straightforward how to leverage the topological distances to obtain stability and generalization results; this deserves deeper study. A second topic, which we address empirically, is the increase in the rank of the embeddings. However, there we observe a strong dependence on the baselines, which will make the study more challenging. Moreover, this direction has only lately been studied in SSL in general, with measurement methods presented as recently as at ICML 2023 [1]. We sincerely thank the reviewer for recognizing that these kinds of studies are beyond the focus of our paper and for not letting it influence the score too negatively. **Q1 Explainability** We see at least two ways in which topological features may offer particular advantages. First, the filtrations can be designed based on arbitrary domain knowledge, and this knowledge may indeed be re-discovered in the embeddings if both the filtration and the embedding are chosen carefully.
For instance, there are certain kinds of porous crystals called MOFs whose pore diameters can be captured by using a combination of VR filtration and PIs [2]; in a nutshell, the filtration iteratively "aggregates" the pore-delimiting atoms, a novel topological structure appears once the process is finished, and the length of the aggregation is recorded in the persistence diagram (this example uses 2D topological voids and hence goes beyond the graph homology we introduce, but we think it is very illustrative). Note that for this kind of analysis the model has to be encouraged to learn the underlying topological fingerprint, e.g., in the way TAE learns PIs. Our CL-based approach uses the fingerprints in a different way, to retain the overall structure of the PH-based embedding space, in terms of distances. This offers a different angle of explainability, reflecting the relations/similarity between molecules. We believe that this is a unique feature of PH-based fingerprints and particularly interesting for SSL in that it directly supports the main goal of pre-training, learning a comprehensive embedding space. Also model calibration (important for interpreting the results) is critically relying on distance awareness [3]. ----------------------------------------- [1] Garrido, et al. "Rankme: Assessing the downstream performance of pretrained self-supervised representations by their rank." ICML, 2023. [2] Krishnapriyan, et al. "Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metal‑organic frameworks" Scientific reports 11.1 (2021): 8888. [3] Liu, et al. "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness." NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Dear reviewer, Thanks for supporting the review process. Please briefly acknowledge the rebuttal by the authors and ask for additional clarifications if required. Best,\ Your AC
Ambient Diffusion: Learning Clean Distributions from Corrupted Data
Accept (poster)
Summary: This paper proposes to train diffusion models that can recover corrupted data without training on clean data. The key idea is that, given a corruption matrix $A$, one can further sample a corruption matrix $\tilde{A}$ given $A$, and the model learns to predict all the existing pixels. It is empirically shown that this trick ensures robustness against higher corruption levels and can restore data with better performance than models trained on clean data. Strengths: - The authors propose the first diffusion-based method that can restore data from corruption without training on clean data. - The method can reuse a regular diffusion sampler without much modification. - The model is theoretically guaranteed to recover clean data under some rank assumptions. - The method alleviates the memorization issues of diffusion models by considering corruption schemes. Weaknesses: - More exposition is needed on some details. What kind of $\tilde{A}$ is used during inference? Is it necessary to sample using the same $\delta$ during training? Do we fix a $\tilde{A}$ for all sampling steps or resample each step? Does taking the expectation over multiple samples of such $\tilde{A}$ work better? - The paper lacks implementation details. E.g. $h_\theta(\tilde{A}, \tilde{A}x_t, t)$ explicitly depends on $\tilde{A}$. How is this dependency implemented in practice? - Line 248: the authors forgot a line break. - The authors only considered random masking corruption. However, there are many other types of linear corruption schemes. Does the method stay robust against other corruption such as Gaussian blur, etc.? - It would be great to analyze the effect of different sampling schemes for $\tilde{A}$. How does FID change w.r.t. $\tilde{A}$ with increasing levels of further corruption? Is there a sweet spot for further corruption? - Why is there no comparison with AmbientGAN for Table 1, as it is an important baseline?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions and concerns are listed in the section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors have adequately discussed limitations of the model. Some further discussion on societal impact is encouraged as it is closely related to privacy issues of training data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their positive and constructive feedback. We are glad that the Reviewer appreciated many aspects of our work, including the novelty of the method, the theoretical analysis, and the implications of reducing memorization. > More exposition is needed on some details. What kind of $\tilde{A}$ is used during inference? Is it necessary to sample using the same $\delta$ during training? Do we fix a $\tilde{A}$ for all sampling steps or resample each step? Does taking the expectation over multiple samples of such $\tilde{A}$ work better? We use exactly the same process for generating $\tilde{A}$ during inference. We first sample $A$ from the corruption distribution and $B$ with the same $\delta$ and then set $\tilde{A} = BA$. We cannot use a very different value of $\delta$ during inference because, during training, the network is trained to take as input images with missing entries corresponding to $\tilde{A}$. Regarding fixing or resampling the matrix $\tilde{A}$: in all of our experiments, we use a fixed $\tilde{A}$ for all sampling steps. We tried both approaches and found that the first approach works better. We also tried fixing multiple $\tilde{A}$ for all sampling steps and using an average of $\mathbb E[x_0 | \tilde{A} x_t]$ as an approximation of $\mathbb E[x_0 | x_t]$ and found that this sampler generates blurry images. We will clarify these points in the paper. > The paper lacks implementation details. E.g. $h_\theta(\tilde{A}, \tilde{A}x_t, t)$ explicitly depends on $\tilde{A}$. How is this dependency implemented in practice? We thank the Reviewer for raising this important clarification question: we concatenate the image with the mask along the channel dimension (i.e., the mask is an extra channel). We will make this clear in the paper. > The authors only considered random masking corruption. However, there are many other types of linear corruption schemes.
Does the method stay robust against other corruption such as Gaussian blur, etc.? We have only experimented with masking (pixels and blocks), but we do not see any reason why this method wouldn't work for other corruptions, as long as the assumptions needed for our theory are satisfied. We are currently in the process of setting up some non-masking experiments and we will try to include them in the Appendix for the camera-ready version. > It would be great to analyze the effect of different sampling schemes for $\tilde A$. How does FID change w.r.t. $\tilde A$ with increasing levels of further corruption? Is there a sweet spot? We thank the Reviewer a lot for raising this question. For a fixed corruption $p$, as we increase $\delta$, we increase the number of missing pixels in the inputs of the network during training. Intuitively, we want to keep $\delta$ as low as possible since we do not want to add additional corruption that is not present in our data. Since the network is trained to predict $\mathbb E[x_0 | \tilde A x_t, \tilde A]$, as $\delta$ goes to $1$, $\tilde A$ becomes the zero matrix and the network only learns the mean of the data distribution. On the other hand, as $\delta \to 0$, the network is not penalized for mistakes in the pixels that are not observed and hence can make arbitrarily wrong predictions in the missing pixels. Empirically, we want to set $\delta$ to the smallest possible value for which the network learns to predict correctly all the missing pixels. As the Reviewer suggested, there is a sweet spot! To illustrate this, we do a small ablation for the value of $\delta$ for random inpainting in CIFAR-10:

| $p$ | $\delta$ | Inception Score |
| --- | --- | --- |
| 0.4 | 0.0 | 6.70 (from Figure 5) |
| 0.4 | 0.1 | 7.45 (from Figure 6) |
| 0.4 | 0.4 | 6.95 (experiment added for the rebuttal) |

As seen, setting $\delta$ to $0$ leads to poor performance: the network does not learn to predict the missing values.
Setting a value of $\delta$ to $0.4$ makes sure that the network learns to make accurate predictions everywhere, but now the inputs are more corrupted (only 36% of the pixels survive on average), and hence the predictions are more coarse. Remarkably, the performance of the model with hyperparameters $p=0.4, \delta=0.4$ is only marginally better than the performance for $p=0.6, \delta=0.4$, which has an Inception Score of $6.88$. This is because even though in the former case we have less corruption in our dataset, the effective corruption in the inputs of both models is the same, i.e. only $36\%$ of the pixels survive. The optimal value of $\delta$ depends on the dataset and more importantly the resolution – in higher resolution datasets there is more redundancy and we can afford to be more aggressive in the extra corruption. One could try to find the exact value that maximizes performance for a given $p$ (binary search), but due to the high computational cost of the training, we choose not to ablate this further. > Why is there no comparison with AmbientGAN for Table 1, as it is an important baseline? It would be hard, since we would need to train AmbientGANs for these datasets; plus, it would be much worse than the baselines we compare against. To explain why: AmbientGAN is a framework to train GANs with missing data. Table 1 compares inpainting, i.e. how well we restore a given image with missing pixels, using a pre-trained generator. GANs can be used as priors for image inpainting, but no pre-trained AmbientGANs are available for these large datasets. Also, diffusions perform better compared to GANs for inpainting and other inverse problems. Therefore, beating state-of-the-art pre-trained diffusion models (trained on clean data!) used for inpainting is a harder task than beating AmbientGAN-guided inpainting.
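To make the mask composition and the role of $\delta$ concrete, here is a minimal numpy sketch of $\tilde{A} = BA$ for random pixel masking and the resulting effective survival rate $(1-p)(1-\delta)$; the image size and seed are arbitrary, and this is an illustration rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 256  # arbitrary image size for the illustration

p, delta = 0.4, 0.4  # dataset corruption level and extra training corruption

# A: dataset mask (each pixel survives with probability 1 - p)
A = rng.random((H, W)) > p
# B: further corruption on top (each pixel additionally kept with probability 1 - delta)
B = rng.random((H, W)) > delta
A_tilde = A & B  # composed mask, i.e. the diagonal of BA

# effective survival rate: (1 - p)(1 - delta) = 0.6 * 0.6 = 0.36
print(f"surviving pixels: {A_tilde.mean():.2f}")
```

The composed mask never reveals a pixel that the dataset mask already removed, which is why training against $\tilde{A}x$ requires only the corrupted observations $Ax$.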
We hope this clarifies things and we plan to include this discussion to make it more clear in the paper. --- Rebuttal Comment 1.1: Comment: As the discussion period approaches its end, we would like to kindly ask if the Reviewer had a chance to read our rebuttal. We thank again the Reviewer for their time and their constructive feedback in their initial review and we hope that our rebuttal addressed the Reviewer's questions.
Summary: In summary, the authors propose a diffusion-based framework that can learn unknown distributions from highly-corrupted samples, allowing the training of generative models without relying on clean training data. Their approach introduces additional measurement distortion and successfully predicts the original images from corrupted ones. The method is applicable to various corruption processes and achieves promising results on benchmark datasets. Strengths: - The problem formulation by itself is interesting, and the proposed method is novel. - The paper proposes a new and interesting domain of learning the image data distribution w/o access to the ground truth data samples. - The paper is well written and easy to follow. - The proposed training and sampling procedure is scalable and easy to incorporate into the current diffusion model framework. - The authors provide theory for the effectiveness of the proposed method. - The proposed method only needs 1 NFE to produce comparable results. - The method paves the way to alleviating the memorization issue of diffusion models. Weaknesses: - From my perspective, I do not see significant weaknesses in this paper. - One potential issue is the significant drop in FID when the model is trained on images with a large ratio of corrupted pixels. However, I think this should not be criticized. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you provide the performance of the proposed method in Table 1 with more timesteps of sampling? Does the performance improve? If no, then can you please explain why? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author lists the limitations of the proposed method in the paper, which are reasonable.
I appreciate the authors' adequate limitation summary and it sheds light on future improvements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very glad that the Reviewer appreciated the importance of the problem, the novelty, the presentation, and the theoretical and practical implications of our work! > Can you provide the performance of the proposed method in Table 1 with more timesteps of sampling? Does the performance improve? If no, then can you please explain why? We thank the Reviewer for the great question. We achieve optimal reconstruction in one step because we learn the conditional expectation (as predicted by our theory) which is the best reconstruction under the $l_2$ loss. We thank again the Reviewer for their positive feedback and we would be happy to take further questions, if any. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your answer. I will keep the current score.
Summary: This paper describes a method to learn a denoising diffusion model only with corrupted data. This is an important problem in many areas of applied science where there is no access to ground truth. Another important potential benefit of this method is to overcome memorization of the training images. The main idea of this method is described as introducing additional corruption and training on this doubly corrupt data. The authors describe a derivation to estimate the conditional expectation of the uncorrupted image only using the corrupted data. They show (Table 1) that their method outperforms other methods in solving the random inpainting problem. Also, the method is used to fine-tune Deepfloyd IF on smaller samples. This result is used to show that the method overcomes memorization. Strengths: This work is motivated in two ways: learning the score of the distribution of clean data with access only to corrupted data, and avoiding memorization. These are both very important practical and theoretical topics which this work tries to address. The trick which is used to train the models (adding more corruption) is clever, and the network manages to learn to inpaint and denoise images (Figure 7). As mentioned under weaknesses, I think the first goal is not fully achieved: the theoretical results seem incorrect and the practical results seem to be constrained to specific types of corruption. However, the memorization results seem very impressive, although the memorization analysis is carried out only for the fine-tuned model. Weaknesses: There are a number of fundamental technical and conceptual flaws which need to be considered and fixed: 1) The training setup was not clear in the text, but this is what I gathered: the target image during training is a corrupted image $Ax_0$, and the input is a noisy and more corrupted image $BAx_0 + \sigma \eta$. All throughout, the corruptions are assumed to be random or block missing pixels.
The network learns to remove the noise and inpaint pixels that are removed by $B$. At test time, the network removes noise and also inpaints *all* pixels, since it doesn't know which pixels are dropped due to $A$ and which pixels are dropped due to $B$. If this is the training setup, please clarify in the text. If not, please describe what was the training setup. 2) Equation (3), the objective after additional corruption, is the distance between the doubly corrupted image and the $\textbf{clean}, x_0,$ image. The entire premise of the work is that the clean image is not available, so why is it assumed to be available during training? The objective should be the distance from $Ax_0$ instead. It needs to be clarified whether this is a typo or the authors actually used clean data, $x_0$, during training. 2) If that is a typo and the target image during training is $Ax_0$, then after training is completed the network works as a denoiser plus inpainter. As a result, the Tweedie equation is not a good description of the output of this network anymore. That is, the output is not $E[x_0|x_t]$, where $x_t = x_0 + \sigma_t \eta$. So, this output cannot be used directly as an estimate of the score in the diffusion model. 3) To remedy the above-mentioned problem, the authors propose eq 3.3, in which they claim to approximate the score, $E[x_0|x_t]$, with $E[x_0|\tilde{A}x_t, \tilde{A}]$. It is not at all clear what the justification behind this approximation is. Again, this expectation is a very different entity from the actual score, and it is not clear why direct use of it makes sense to estimate the score. This expectation is the solution to the inverse problem given the forward measurement $A$, as opposed to the solution to the mere denoising problem (i.e. the score). 4) Additionally, two more terms are added to the update line in eq 3.4. The justification for this is described from line 166 to 175. It starts with $\gamma_t$ going to zero when $t$ approaches zero.
It is not clear why $\gamma_t$ goes to zero when $t$ goes to zero. Please clarify this assumption. The reasoning that follows this assumption is also not clear and sounds ad hoc. Did you add these terms because the update line of eq 3.3 did not work in practice? What is the intuition or theory behind this choice? Please clarify both eq 3.3 (why did you estimate one expectation with another?) and eq 3.4 (why did you add two terms and why do they make sense?). 5) The theory section needs at least a rewrite because it is not clear what the goal of this section is. The section starts with the goal of proving that the optimal estimate of the clean image given the corrupted image is equal to $E[x_0| Ax_t = y, A]$. It is not clear why the authors need to prove this, since this is a basic fact from Bayesian machine learning: the optimal estimate of corrupted data is the conditional mean of the predictive distribution (refer to textbooks like Bishop). The problem is that this expectation is not equal to the score, so theoretically it can't be used to estimate the score. Of course, what is learned by this network is approximately the optimal reconstruction of $A(x+\sigma \eta)$, but as long as it is not the score, it is not theoretically valid to use it iteratively in a diffusion model. In a nutshell, the learned network is a denoiser/inpainter, not a score estimator (i.e. a pure denoiser). 6) On top of the above-mentioned issues, this network learns to reconstruct + denoise images only if $E_{A|\tilde{A}}[A^TA]$ is full rank. This strong assumption requires a high level of randomness in the corruption, which is not very common in many real-world applications. For example, if the ground truth images miss some information systematically (let's say they are all blurred), the network will not be able to reconstruct.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In addition to the theoretical questions and comments I made under weaknesses: Why do you think you achieve optimal sampling performance in one iteration? How does the training time compare to that of uncorrupted models? Do you need to train on more samples? ________________________________________________________ Note: I am willing to raise the score if the questions asked under Weaknesses and here are addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The main limitation of the work (aside from some jumps and ambiguity in the theoretical results) is the type of corruption this method can work with. The corruption must be linear and, in addition, $E_{A|\tilde{A}}[A^TA]$ must be full rank. This rules out many real-world applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their feedback! > Equation (3), [...] is the distance between the doubly corrupted image and the clean image. The objective should be the distance from $Ax_0$. It needs to be clarified whether this is a typo or the authors actually used clean data. The Reviewer has misread Equation 3. *There is a parenthesis outside*, A multiplies both the output of the network and the clean image. Hence, the only thing needed to train is the corrupted image Ax and not the clean image itself. **We do not use clean data during training.** > The theory section needs at least a re-write because it is not clear what is the goal of this section. [...] It is not clear why the authors need to prove this, since this is a basic fact from Bayesian machine learning [...]. The Reviewer may have a misunderstanding regarding our theoretical results, potentially because of misreading Equation 3. Proving that the minimizer of $\mathbb E_{x_0,Ax_t}[ || f(Ax_t) - x_0||^2]$ is the conditional expectation is a textbook argument. However, this is *not* what we prove. We prove something much stronger which is that there is an objective function (Eq. 3.2) that doesn’t need access to the clean images $x_0$ and still has as a minimizer the conditional expectation (Theorem 4.1). Our proof does build on the standard techniques but it has the key benefit of making clear what condition we need on the distribution of $A, \tilde{A}$ for the theorem to hold. Thus we argue that our result is novel, and we think that the Reviewer may not have appreciated it fully. We kindly ask the Reviewer to reconsider their evaluation, keeping in mind that our loss does not require clean images and that the proof technique has some differences compared to the standard setting. We would be happy to take further feedback if there are additional concerns. > If this is the training setup, please clarify in the text. If not, please describe what was the training setup. 
The Reviewer’s understanding of the training setup is correct. We will clarify this further in the main text. > [...] This expectation is a very different entity from the actual score, and it is not clear why direct use of it makes sense to estimate the score. It is true that we are making an approximation there, as we acknowledge in the paper. We approximate $\mathbb E[x_0 | x_t]$ with $\mathbb E[x_0 | Ax_t, A]$. Intuitively, if $A$ is not dropping a lot of information, these two conditional expectations are close. This approximation becomes worse for high corruption. This is a limitation of this work. In Lemma A.3 we prove that access to $\mathbb E[x_0 | Ax_t, A]$ is enough to reconstruct the true distribution whenever it is possible to reconstruct it from measurements. However, for the time being, we do not have an efficient algorithm to do so and thus we resort to this approximation. > two more terms are added to the update line in eq 3.4. [...] It is not clear why $\gamma_t$ goes to zero when $t$ goes to zero. [...] Did you add these terms because the update line of eq 3.3 did not work in practice? Please clarify both eq 3.3 and eq 3.4. We think that the Reviewer has a misconception here. The $\gamma_t$ appears already in Equation 3.3 and it is not something that we added; it is the term that appears in all diffusion models when you discretize the reverse SDE/ODE. It always goes to zero and stems from the fact that the magnitude of the noise is an increasing function of time, and in the limit $t \to 0$ we do not have any noise. The only difference from the classic ODE discretization is that we have $\mathbb E[x_0 | Ax_t, A]$ instead of $\mathbb E[x_0 | x_t]$. The Reconstruction Guidance sampler adds an additional gradient update that refines all pixels throughout the denoising process. The Fixed Mask sampler uses a fixed mask $A$ and thus the masked pixels are not getting refined at every iteration.
Hence, for high corruption, these pixels get more “averaged” values. The reconstruction guidance sampler mitigates this. We ablate this sampler in Table 5 in the Appendix and we see modest improvements that were not worth the extra computational cost; hence, we did not use this sampler in any experiment in the main paper. We hope this clarifies things. > This network learns to reconstruct + denoise images only if $E_{A|\tilde A}[A^TA]$ is full-rank. This strong assumption requires a high level of randomness in the corruption which is not very common in many real-world applications. Without this assumption, it is *impossible* to reconstruct the distribution from corrupted samples without making assumptions on the distribution to be reconstructed. This is not a limitation of our method; any method that tries to recover the distribution from corrupted samples would fail. The reason is that it is impossible to distinguish between two clean distributions that become identical after the pushforward function. E.g., in the blurring example, depending on what the blurring kernel is, there could be many distributions that lead to the same blurred measurements. > Why do you think you achieve optimal sampling performance in one iteration? We achieve optimal reconstruction in one step because we learn the conditional expectation, which is the best reconstruction under the $l_2$ loss. > How does the training time compare to uncorrupted models? Do you need to train on more samples? We thank the Reviewer for this great question! The training time and the dataset size are the same as for the training of the uncorrupted models. All models are trained for 200K steps (following the EDM paper) and we use the full dataset. For the finetuning experiments, we show that we can even finetune with 300 images or less. We refer to the Appendix, Section C, for the full training details.
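The full-rank condition discussed above can be sanity-checked numerically for the random-masking case: if $A$ is a diagonal matrix whose entries are i.i.d. Bernoulli$(1-p)$, then $A^\top A = A$ and $\mathbb{E}[A^\top A] = (1-p)I$, which is full rank for any $p < 1$. A small Monte Carlo sketch (dimensions and seed are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, p, n_samples = 16, 0.4, 20_000

# Monte Carlo estimate of E[A^T A] for random diagonal pixel masks
acc = np.zeros((d, d))
for _ in range(n_samples):
    a = (rng.random(d) > p).astype(float)  # diagonal of A: 1 w.p. 1 - p
    A = np.diag(a)
    acc += A.T @ A
E_AtA = acc / n_samples

# E[A^T A] is approximately (1 - p) I: full rank, so the condition holds
print(np.linalg.matrix_rank(E_AtA))  # prints 16 (= d)
```

By contrast, a deterministic corruption such as a fixed blur kernel with a nontrivial null space would make this expectation rank-deficient, which matches the identifiability argument in the rebuttal.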
We hope that the Reviewer's comments are addressed and the Reviewer will consider increasing their score as they noted in their review. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my comments and questions. The comment about eq 3.2 was indeed due to my misreading of it. Thanks for the clarification. In my initial review, as I had mentioned, I had assumed the apparent missing A was a typo and not a conceptual error. So the rest of my initial review was based on the assumption that you had the correct form of eq 3.2 (as you did). Regarding the theory section, I am now convinced that Theorem 4.1 is valuable beyond what I initially evaluated. However, it relies on the strong assumption of full-rank $E[A^TA]$, so a discussion of this assumption should be included in the text. This assumption excludes many real-world corruptions. It should be mentioned earlier in the text (in the intro or even the abstract). In its current state, the text gives the impression that this method magically learns the score from any corrupted data, until Section 4. It can be mentioned earlier that this method works as long as the corruption is not systematic. For more discussion, refer to the literature on Stein's Unbiased Risk Estimator (SURE), in which the minimizer is estimated without access to clean data for the case of Gaussian corruption. Your proof can be thought of as a generalization of this old idea. It is very interesting that the corruption does not require training on more data points. I think it is worth including this in the main text. A minor note: do not assume that all readers are familiar with the notations you use. Explain in the text what $\gamma$ is. The paper should be self-sufficient in terms of notations and definitions. Regarding the final update line, eq 3.3 and eq 3.4, I still find the discussion unsatisfactory. This is at the core of the method, and I am not convinced why this crude approximation (in 3.3) is acceptable.
Of course, if the corruption is small, the approximation would not be too far off, but that is not what this paper claims to do. Additionally, the term added in 3.4 is not well motivated. Again, if there is a good motivation in the paper you're borrowing this from, it should be described and explained in the text. Why is that term a good addition to your update line? In its current state, there is no clear connection between the theory section and this added term. I raise my score from 4 to 5 since the authors addressed some of my questions. There is still room for improvement regarding the above points (points 3 and 4 in my initial review). --- Reply to Comment 1.1.1: Title: Additional Discussion Comment: We thank the Reviewer for engaging in the discussion and for raising their score. Their valuable time is deeply appreciated! We are glad that the issue that arose from the misreading has now been resolved, that the Reviewer appreciated our Theorem 4.1, and that the Reviewer found it satisfying that we didn’t use more data points for the corrupted models. We will definitely include this in the main text, as recommended by the Reviewer. We will also make it clearer from the Introduction that our method does not work for arbitrary corruptions. In fact, there is no method that learns the true distribution under arbitrary corruptions. The assumption we make on the corruption process is necessary to learn from inpainted data – if we never observe certain pixel locations, there is no way to identify the distribution in these locations unless we make assumptions on the distribution itself. As we pointed out in our rebuttal, previous methods such as AmbientGAN make such assumptions (e.g., from AmbientGAN: “A critical assumption for our framework and theory to work is that the measurement process is known and satisfies certain technical conditions”). These assumptions are not very restrictive when we control the corruption process (e.g.
to reduce memorization) but preclude (theoretically) applying our method to systematic corruption processes, as the Reviewer correctly noted. We will clarify this very early in the text, as recommended by the Reviewer. We also want to point out that our method could potentially achieve reasonable performance in practice for corruption processes that violate such assumptions, but in this case it does not come with any theoretical guarantees. We will also further highlight how our proof adds to the literature on Stein's Unbiased Risk Estimation (SURE) and generalizes it to the case of non-Gaussian corruption. We thank the Reviewer for recommending this. Finally, we will add the explanation from our rebuttal to the main text regarding what $\gamma_t$ is, to avoid potential confusion. Regarding the remaining concern of the Reviewer about the sampling, we want to start by acknowledging again that we are making an approximation there. We clearly state this in the Limitations Section of our work (“Further, in this work we only experimented with very simple approximation algorithms to estimate $\mathbb E[x_0|x_t]$ using our trained models”) and in the Method Section (“we approximate $\mathbb E[x_0|x_t]$ given the predictions of $\mathbb E[x_0|Ax_t, A]$”). Surprisingly, this simple approximation outperforms previous baselines and works reasonably well even for high corruption (potentially because there are a lot of redundancies in natural images). We plan to explore in future work how to make this step exact, and we might get a significant performance boost (especially in the high corruption regime) by fixing this. Regarding the term in Equation 3.4, this is inspired by the video diffusion models literature. For video generation with diffusion models, the models are trained to denoise the current frame but are regularized so that the denoised version is not too different from the prediction for neighboring frames.
This heuristic is needed for smooth transitions in the video generation process. Similarly, in our setup, we only have access to models that predict given limited context, and we want to combine the predictions such that there are no artifacts or inconsistencies. The Fixed Mask sampler ensures there are no artifacts by always masking the same pixels. In Eq. 3.4, we added this extra term to (heuristically) remedy the fact that the values in the masked pixels of the Fixed Mask sampler only depended on the final prediction – we update all pixels at every prediction using many different masks and ensure consistency of the different predictions through this added term. Again, this is a heuristic step, and future work might significantly improve performance by making it exact, so we understand if the Reviewer finds this approximation unsatisfactory. We did not end up using this sampler for the experiments in the main paper due to the extra computation needed in return for marginal benefits; see, e.g., Appendix E.3 and Table 5. Since the Reviewer did not appreciate this attempt to heuristically improve the sampling process, we can move this entirely to the Appendix (together with the discussion above) and make space for the rest of the requested changes that are more critical to the paper. We hope this discussion is useful, and we want to thank the Reviewer again for helping us improve our paper!
Summary: The paper focuses on learning clean distributions from corrupted data. When training the diffusion model, the training dataset contains only highly-corrupted examples. The authors propose a training algorithm for a restoration model by introducing additional measurement distortion. They also provide sampling methods and theoretical analysis. Experimental results demonstrate superior performance. Strengths: * The problem of handling corrupted datasets is significant even in generative model learning. This paper could be seen as one that explores this direction. * The paper is well-structured and easy to follow. * The fact that a model trained on corrupted data does not memorize the training dataset, as supported by Figure 1 and Figure 4, is very important. This may have significant implications for various applications. Weaknesses: * Sampling Process * It would be helpful to provide an algorithm or a detailed explanation of the sampling process from scratch. * Equations 3.3 and 3.4 describe how the sample at time $t$ is generated from the sample at $t-\Delta t$. It is necessary to explain how the evaluation of single networks allows the generation of images in the experiments. * Related work * It would be beneficial to include a discussion of previous research on dealing with incomplete datasets in generative models, such as [1, 2, 3, 4]. [1] Li, S. C. X., Jiang, B., & Marlin, B. (2018, September). MisGAN: Learning from Incomplete Data with Generative Adversarial Networks. In International Conference on Learning Representations. [2] Mattei, P. A., & Frellsen, J. (2019, May). MIWAE: Deep generative modelling and imputation of incomplete data sets. In International Conference on Machine Learning (pp. 4413-4423). PMLR. [3] Ipsen, N. B., Mattei, P. A., & Frellsen, J. (2020, September). not-MIWAE: Deep Generative Modelling with Missing not at Random Data. In International Conference on Learning Representations. [4] Richardson, T.
W., Wu, W., Lin, L., Xu, B., & Bernal, E. A. (2020). MCFlow: Monte Carlo flow models for data imputation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14205-14214). * Experiments * It would be beneficial to provide the FID results for Figure 5 to demonstrate the differences. * In addition to Figure 5, it would be valuable to compare the proposed model with existing generative models such as AmbientGAN in various experiments. * Presentation * It would be helpful to have better spacing between subfigures in Figure 2 to improve the clarity of the captions. * Proper citations are needed in the background section. * Figure 3 would be better placed within a paragraph rather than between paragraphs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: They provided in the last paragraph in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the constructive feedback! We are glad that the Reviewer appreciated the novelty, the presentation and the implications our work could have in various applications related to memorization. > It would be helpful to provide an algorithm or a detailed explanation of the sampling process from scratch. We agree with the Reviewer. Please see the detailed sampling algorithm in the one page PDF accompanying this rebuttal. We will include this algorithm in the next revision of our work. > it is necessary to explain how the evaluation of single networks allows the generation of images in the experiments. This becomes clear with the sampling algorithm we include in our rebuttal. This works by a process of iterative restoration-degradation, similar to how all diffusion models operate. > It would be beneficial to include a discussion of previous research on dealing with incomplete datasets in generative models, such as [1, 2, 3, 4]. We thank the Reviewer for bringing this relevant work to our attention! We will definitely add these references in the camera-ready version of our work. All these works propose ways to learn other classes of generative models (such as GANs, Normalizing Flows and VAEs) from missing data. The MisGAN work generalizes AmbientGAN in the case where the measurement operator is unknown. Specifically, the authors propose an additional generator that learns to model the corruption mechanism with adversarial training. MCFlow is a framework, based on a variant of the EM algorithm, that can be used to train normalizing flow models from missing data. Finally, MIWAE and Not-MIWAE are frameworks to learn deep latent models (e.g. VAEs) from missing data when the corruption process is known or unknown respectively. Our work provides a diffusion-based framework for the missing data problem and thus expands this interesting prior work. 
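The iterative restoration-degradation loop mentioned above can be sketched generically. The snippet below is our own illustrative sketch, not the algorithm from the rebuttal PDF: the function names and noise schedule are placeholders, and `denoise(x, sigma)` stands in for the learned approximation of $\mathbb E[x_0 | x_t]$ (in Ambient Diffusion, approximated from $\mathbb E[x_0 | Ax_t, A]$).

```python
import numpy as np

def restoration_degradation_sample(denoise, shape, sigmas, rng):
    """Generic diffusion sampling loop (DDIM-style deterministic step).

    Starts from pure noise at sigma_max and alternates a restoration
    step (estimate the clean image) with a re-degradation step
    (move to the next, smaller noise level).
    """
    x = rng.standard_normal(shape) * sigmas[0]
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        x0_hat = denoise(x, sigma)                        # restoration
        x = x0_hat + (sigma_next / sigma) * (x - x0_hat)  # re-degradation
    return x

# Toy check with a Gaussian prior x_0 ~ N(0, 1), where the posterior mean
# E[x_0 | x_t] = x_t / (1 + sigma^2) is known in closed form: the loop
# should return samples with standard deviation close to 1.
rng = np.random.default_rng(0)
sigmas = np.geomspace(80.0, 1e-3, 200)
samples = restoration_degradation_sample(
    lambda x, s: x / (1.0 + s * s), (5000,), sigmas, rng)
print(abs(float(samples.std()) - 1.0) < 0.1)  # True
```

The toy denoiser is exact for the Gaussian prior, so the check isolates the discretization error of the loop itself.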
> It would be beneficial to provide the FID results for Figure 5 to demonstrate the differences. Figure 5 shows the Inception Score results for CIFAR-10. The FID results for the same dataset are shown in Figure 6. Unfortunately, the authors of AmbientGAN do not report FID scores. > In addition to Figure 5, it would be valuable to compare the proposed model with existing generative models such as AmbientGAN in various experiments. We thank the Reviewer for the suggestion. AmbientGAN only provides quantitative results on MNIST and CIFAR-10 (Figures 7 and 8 in the AmbientGAN paper). We have the comparison with CIFAR-10 (Figure 5) in the paper. We did not experiment on the MNIST dataset since it is a rather toy problem for image generation. The Reviewer brought to our attention the follow-up work to the AmbientGAN paper, MisGAN. The authors of MisGAN report FID scores for different erasure probabilities for CIFAR-10 and CelebA. We will include these comparisons in the next version of our paper. A short comparison is provided below:

**CelebA**:

| Corruption Probability | Method | FID |
| --- | --- | --- |
| 0.6 | MisGAN | 37.42 |
| 0.6 | Ambient Diffusion | **6.08** |
| 0.8 | MisGAN | 100.0 |
| 0.8 | Ambient Diffusion | **11.19** |
| 0.9 | MisGAN | 141.11 |
| 0.9 | Ambient Diffusion | **25.53** |

**CIFAR-10**:

| Corruption Probability | Method | FID |
| --- | --- | --- |
| 0.4 | MisGAN | 18.95 |
| 0.4 | Ambient Diffusion | **18.85** |
| 0.6 | MisGAN | 49.30 |
| 0.6 | Ambient Diffusion | **28.88** |
| 0.8 | MisGAN | 111.50 |
| 0.8 | Ambient Diffusion | **46.27** |

We will include this comparison in the camera-ready version of our work. It can also be found in the one-page PDF accompanying this rebuttal. We emphasize that MisGAN is solving a harder problem than we are, since for MisGAN the corruption operator is not known and needs to be inferred; we will explain this in the paper. We finally want to thank the Reviewer for the comments on the presentation of our work.
We will make sure to add the additional citations, fix the spacing and improve the placement of Figure 3, as suggested. --- Rebuttal Comment 1.1: Comment: Thank you for your response and further comparison. My concerns are mostly addressed, so I'm raising my rating from 5 to 6.
Rebuttal 1: Rebuttal: We thank the Reviewers for their constructive feedback! We are very glad that our work was well-received and that the novelty, the experimental and the theoretical contributions were generally appreciated by the Reviewers. We include separate replies to each one of the Reviewers. We also attach a one-page PDF that contains additional experiments and a formal statement of our sampling algorithm, as requested by some of the Reviewers. We remain available to answer additional questions if any! Pdf: /pdf/14564b6ea108139751040ab23bcea0c48fa728ae.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Fast and Provable Algorithm for Sparse Phase Retrieval
Reject
Summary: The authors introduce a novel second-order method for sparse phase retrieval. Compared to previous algorithms, it exhibits faster convergence and better recovery. The method leverages sparsity to reduce the size of the linear system that needs to be solved at each iteration in order to determine the approximate Newton direction (reduced from n^3 to s^3), and a second-order approximation of the intensity-based objective. Strengths: This paper presents strong results and a theoretical analysis of the algorithm in both the noisy and noise-free case. Weaknesses: The sample complexity required for initialization and refinement is sub-optimal. The experiments are only on toy data. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What are the practical implications of the sub-optimal sample complexity required for initializing the algorithm and for the refinement stage? Does it limit the applicability of the method on real-world signals? Would the authors be able to show experimental comparisons on real-world phase recovery examples? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discuss the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. What are the practical implications of the sub-optimal sample complexity required for initializing the algorithm and for the refinement stage? Does it limit the applicability of the method on real-world signals? **Reply:** Thank you for your insightful questions. The sub-optimal sample complexity required for the refinement stage, as revealed by our extensive experiments, does not adversely impact the practical performance of our algorithm. Indeed, our method demonstrated successful recovery with fewer measurements in numerous numerical experiments. This sub-optimality primarily arises in our theoretical analysis when dealing with the Hessian (please refer to Lemma B.5 for more details). The sub-optimal sample complexity required for the initialization stage remains an open problem. In the context of sparse phase retrieval, we anticipate a linear dependence of the sample complexity on $s$, whereas our current result shows a quadratic dependence, $\mathcal{O} (s^2 \log n)$. The conditions leading to a linear dependence on $s$ are not yet clear. For example, considering $s = n$ (which reduces the problem to phase retrieval), a linear dependence on $s$ has already been established. An important open question is the lower bound on $s$ that would ensure a linear dependence of the sample complexity. As to whether this sub-optimality in the initialization stage limits the real-world applicability of our method, the answer is still unclear. Various initialization algorithms have been designed, but the gap of sub-optimality remains. We would like to draw attention to a recent study [R1] which proposes a new initialization method, reducing the sample complexity from $\mathcal{O} (s^2 \log n)$ to $\mathcal{O} (s \bar{s} \log n)$, where $\bar{s}$ represents the stable sparsity of the underlying signal. However, this study does not entirely solve the problem. 
In light of your valuable comments, we will include a comprehensive discussion on these topics in our revised manuscript. Your insightful queries will undoubtedly contribute to the thoroughness of our paper. [R1] J.-F. Cai, J. Li, and J. You, Provable Sample-Efficient Sparse Phase Retrieval Initialized by Truncated Power Method, Inverse Problems, 39(7):075008, 2023. > 2. Would the authors be able to show experimental comparisons on real-world phase recovery examples? **Reply:** Thank you for your comment. Our research primarily focuses on the theoretical and algorithmic foundations of sparse phase retrieval problems. The reason for this focus is that we believe a robust theoretical underpinning is crucial for developing reliable and efficient algorithms, which can then be used across a wide range of applications. As for experimental comparisons on real-world phase recovery examples, we acknowledge the importance of such experiments. However, establishing a real experimental system for phase recovery is not trivial and is outside the scope of our current study. The design and implementation of such an experimental system would require significant resources and expertise in specific application domains, which our team does not currently possess. We hope that our theoretical contributions will provide a basis for future research, and we look forward to seeing how our results can be applied and validated in real-world settings. --- Rebuttal Comment 1.1: Comment: I thank the authors for their insightful answers to my questions, and for their promise to enrich the manuscript with the discussion on above. I believe this work is an important contribution to the problem of phase retrieval and agree with reviewer CNCT`: this has the potential to become a foundational paper in the field. I also disagree with reviewer kapg's comment regarding this being of interest to few people at the conference. 
--- Reply to Comment 1.1.1: Comment: We greatly appreciate your insightful comments and the time you've dedicated to reviewing our work. Your recognition of our research's potential impact in the field of phase retrieval is profoundly encouraging. Guided by your valuable recommendations, we are committed to enhancing our manuscript by incorporating the discussed points. Once again, we convey our sincerest gratitude for your invaluable feedback and constructive guidance.
Summary: The authors propose a second-order algorithm based on Newton projection for the sparse phase retrieval problem. The proposed algorithm is similar to Hard Thresholding Pursuit (HTP), where the free variables (i.e., the support) are first identified by a hard thresholding step, followed by an update on the free variables via a Newton projection step. As is standard for approaches to phase retrieval, the proposed method first performs an initialisation stage to ensure that the initial guess is sufficiently close to the true signal, then applies the proposed second-order method to obtain global convergence. There are theoretical results proving quadratic convergence for the proposed method. Strengths: The performance shows substantial gains compared to previous methods; moreover, it establishes a quadratic convergence rate. Weaknesses: This work is incremental compared to HTP of [28]. HTP can already be interpreted as a second-order method. In terms of per-iteration complexity, the proposed method is the same as HTP. The comparison in ‘iteration complexity’ is somewhat unclear because the complexity given in HTP is for exact recovery, whereas the rate given in Table 1 for the proposed method is to obtain accuracy $\epsilon$ — is it just that [28] does not prove a quadratic rate, or do we expect [28] to have worse convergence behaviour in general? Moreover, [28] proves finite convergence for their method; does the proposed method also achieve finite convergence? In terms of practical performance, the convergence plots show that the proposed method has faster convergence compared to HTP, but the performance for HTP here is worse than the performance reported in [28]. Perhaps it would be useful to replicate the exact experiments in [28] so that a clear comparison can be given? In general, it would be useful to have a discussion on the differences with HTP and an explanation as to why the performance is superior to HTP, given that both are second-order methods.
I also had a look at the proof and it is again similar to the proof given in [28], so it would be useful again to have a discussion on the differences and novelty over [28]. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please clarify on the differences in convergence results between [28] and the proposed method. It is also unclear to me why the proposed method has superior performance both in terms of convergence and in terms of the sparse solutions recovered, when both are second-order methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. Please clarify on the differences in convergence results between [28] and the proposed method. **Reply:** We appreciate your constructive suggestion. In response to your query about the differences in convergence results between our method and the one presented in [28], we provide the following clarifications. In [28], the authors demonstrated that Hard Thresholding Pursuit (HTP) converges to the exact solution within a finite number of steps, specifically, $\mathcal{O} (\log ( s^2 \log n) + \log ( \Vert x^\natural \Vert / x_{\min}^\natural) )$. On the other hand, the convergence rate of our proposed algorithm is $\mathcal{O}(\log (\log (1 / \epsilon) +\log ( \Vert x^\natural \Vert / x_{\min}^\natural ) ))$. We acknowledge that a direct comparison between these two convergence results is not straightforward. For a more tangible comparison, we could refer to the success criterion used in our research: $\Vert x - x^\natural \Vert / \Vert x^\natural \Vert < 10^{-3}$. Additionally, we often normalize the signal $x^\natural$ such that $\Vert x^\natural \Vert$ equals 1. Under these conditions, the convergence result of our method simplifies to $\mathcal{O}(\log (\log (10^3) +\log ( 1 / x_{\min}^\natural ) ))$. This could potentially be significantly smaller than the result from [28], $\mathcal{O}( \log ( \log n^{s^2} ) + \log ( 1 / x_{\min}^\natural) )$. This suggests that our proposed method may have a faster convergence rate under the given conditions. In light of your feedback, we will include a detailed discussion regarding this aspect in our revised manuscript. We believe this will provide a clearer understanding of the comparative advantages of our proposed algorithm. > 2. It is also unclear to me why the proposed method has superior performance both in terms of convergence and in terms of the sparse solutions recovered, when both are second-order methods. **Reply:** Thank you for your insightful comment. 
While we cannot provide an exact answer to this question, we have some insights that might explain the observed behavior. Both our method and the Hard Thresholding Pursuit (HTP) in [28] could indeed be viewed as second-order algorithms. However, HTP does not explicitly construct the Hessian and Newton direction. This difference could be the cause of the superior performance of our algorithm, both in terms of convergence speed and the quality of sparse solutions recovered. It should be noted that the explicit construction of the Newton direction brings significant challenges in theoretical analysis. This results in a suboptimal sample complexity during the refinement stage of our algorithm's theoretical convergence. To achieve a tighter sample complexity during the refinement stage, a more advanced analytical technique would be needed. We recognize the value of discussing this matter in greater detail. Therefore, we will include a comprehensive discussion on this topic in our revised manuscript. We believe this will provide a clearer understanding of the comparative advantages of our proposed method. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My score remains unchanged. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the valuable time and effort you've invested in reviewing our work. We understand and respect your decision to maintain the original score. If there are any further comments, queries, or suggestions you wish to convey, please feel free to contact us.
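For readers who want a concrete picture of the support-restricted Newton idea discussed above, here is a generic sketch (our own simplification, not the paper's exact update rule): the support is estimated by hard thresholding a gradient step, and the Newton system is then solved only on those $s$ coordinates, which is where the $\mathcal{O}(s^3)$-versus-$\mathcal{O}(n^3)$ saving comes from. The sanity check uses a strongly convex toy quadratic rather than the nonconvex phase retrieval loss, purely to show the mechanics.

```python
import numpy as np

def restricted_newton_step(grad, hess, x, s, eta=1.0):
    """One hard-threshold + support-restricted Newton step (generic sketch).

    The support is estimated from a gradient step, then the Newton system
    is solved only on those s coordinates: O(s^3) instead of O(n^3).
    """
    # 1. Support identification via hard thresholding of a gradient step.
    S = np.argpartition(np.abs(x - eta * grad(x)), -s)[-s:]
    # 2. Newton update restricted to the estimated support.
    H_S = hess(x)[np.ix_(S, S)]          # s x s principal submatrix
    x_new = np.zeros_like(x)
    x_new[S] = x[S] - np.linalg.solve(H_S, grad(x)[S])
    return x_new

# Toy objective f(x) = 0.5 * ||x - x_star||^2 with an s-sparse minimizer:
# one restricted Newton step recovers it exactly.
n, s = 100, 5
x_star = np.zeros(n)
x_star[:s] = np.array([3.0, -2.0, 1.5, -1.0, 0.5])
grad = lambda x: x - x_star
hess = lambda x: np.eye(n)
x1 = restricted_newton_step(grad, hess, np.zeros(n), s)
print(np.allclose(x1, x_star))  # True
```

On the real intensity-based loss, the gradient and Hessian would come from the measurements, and convergence guarantees require the initialization condition discussed in the rebuttal.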
Summary: This paper focuses on the sparse phase retrieval problem and introduces an efficient second-order algorithm based on Newton's method. The algorithm aims to recover sparse signals and offers a quadratic convergence rate while maintaining the same per-iteration computational complexity as first-order methods. Experimental results demonstrate that the proposed algorithm outperforms popular first-order methods in terms of convergence rate and success rate in recovering the true sparse signal. Strengths: 1. The authors' algorithm exhibits a lower complexity per iteration and a higher convergence rate compared to popular first-order methods. It is noteworthy that this is the first algorithm to establish a quadratic convergence rate. 2. The experimental results clearly illustrate the superiority of the proposed algorithm. 3. The paper effectively communicates the motivation behind the development of the second-order algorithm and highlights the complexity reduction achieved by restricting Newton's step to a subset of variables. Weaknesses: 1. The authors mention two prevalent loss functions but do not provide an explanation regarding the difference between these functions in the numerical experiments. It would be beneficial if the authors clearly explained the distinction between the two functions, particularly why the first function is used for initialization and the second one in Newton's update. 2. Equation 12 introduces $J_{k+1}$, which seems to be highly dependent on the choice of $S_0$, the initial support. This raises concerns about the algorithm's sensitivity to the initial point. It would be valuable for the authors to address this issue and discuss the potential impact of the initial point on the algorithm's performance. 3. Regarding the overall contribution, this paper focuses on approximating the objective function using a quadratic function, which can be limiting. Also, this paper may be of interest to only a few people attending this conference.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: More extensive experiments would help. E.g., when designing the experiments for unknown sparsity, it would be better to try different inputs for the sparsity levels. How important is the initialization step? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. More extensive experiments would help. E.g., when designing the experiments for unknown sparsity, it would be better to try different inputs for the sparsity levels. **Reply:** Thank you for your constructive suggestions. We have conducted an additional experiment to address your concerns regarding the handling of unknown sparsity. In the table below, we consider scenarios with unknown sparsity. We input various sparsity levels into each algorithm and compare the success rates of the algorithms in recovering the signal. In these experiments, the underlying signal has a sparsity of 30, a signal dimension of 3000, and the number of measurements is 2000. We excluded ThWF from the comparison because it does not require input sparsity. Our observations indicate that CoPRAM, HTP, and our proposed algorithm demonstrate greater robustness to changes in input sparsity compared to SPARTA.

| Input sparsity | 10 | 20 | 30 | 50 | 70 | 100 | 150 | 200 | 250 | 300 |
|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|-----------|
| CoPRAM | 0 | 0 | 1 | 1 | 1 | 1 | 0.75 | 0.09 | 0 | 0 |
| HTP | 0 | 0 | 1 | 1 | 1 | 1 | 0.71 | 0.22 | 0.02 | 0.01 |
| SPARTA | 0 | 0 | 1 | 1 | 1 | 0.09 | 0 | 0 | 0 | 0 |
| Proposed | 0 | 0 | 1 | 1 | 1 | 1 | 0.93 | 0.85 | 0.76 | 0.66 |

Once again, we thank the Reviewer for the insightful feedback. We will include these discussions and experiments in our revised manuscript. > 2. How important is the initialization step? Equation 12 introduces $J_{k+1}$, which seems to be highly dependent on the choice of $S_0$, the initial support. This raises concerns about the algorithm's sensitivity to the initial point. It would be valuable for the authors to address this issue and discuss the potential impact of the initial point on the algorithm's performance. **Reply:** Thank you for your insightful comment.
We agree that the robustness of the algorithm to the initial point is a critical aspect. In response, we have conducted an additional experiment comparing different common initialization methods. **The results are provided in the attached PDF.** These results demonstrate that our algorithm performs well under various initial conditions. We want to highlight that our theoretical analysis necessitates the initial point to satisfy the condition $\mathrm{dist} (x^0, x^\natural) < \gamma \Vert x^\natural \Vert$ for any $\gamma \in (0,1)$. This condition can be ensured by sparse spectral initialization with a sample complexity of $\mathcal{O} (s^2 \log n)$, with a probability of at least $1 - 8 m^{-1}$. Our theoretical analysis shows that if the initial point fulfills this condition, that is, if it is sufficiently close to the underlying signal, the convergence of our algorithm can be guaranteed, without requiring additional conditions on the initial support. We appreciate your insightful comments and feedback. These discussions and the corresponding experimental results will be included in our revised manuscript. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: It is great to see that the proposed algorithm is more robust to the input sparsity as long as it is larger than the exact one. I did not find the pdf for the initialization results. Could you point me to where to find it? --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and for recognizing the robustness of our proposed algorithm with respect to sparsity. To address your inquiry about the initialization results, they are included in the PDF attached to the "**Author Rebuttal by Authors**" section, located at the beginning of our response. The results are presented in Figure 2, where we compare the phase transitions of our algorithm using three different initialization methods: SPI [R1], which we adopted in our initial submission, THI [R2], and HWFI [R3]. 
Our analysis indicates that our algorithm consistently performs well under each of these initialization methods. Interestingly, we noticed a slightly more robust performance of our algorithm when initialized using SPI and THI compared to HWFI. In summary, our algorithm shows robust performance across a range of initialization methods, as demonstrated by our empirical results. Theoretically, our algorithm is guaranteed to converge to the ground truth provided that the initial point meets the condition $\mathrm{dist} (x^0, x^\natural ) < \gamma \Vert x^\natural \Vert$ for any $\gamma \in (0,1)$. This can be ensured under a sample complexity of $\mathcal{O}(s^2 \log n)$ with a probability of at least $1 - 8 m^{-1}$. Importantly, our primary contributions lie in the refinement stage, where we introduce a novel second-order algorithm based on Newton projection and establish non-asymptotic quadratic convergence to the ground truth. We greatly appreciate your insightful feedback. These discussions and experimental results will be included in our revised manuscript. We trust this clarifies the location and the details of the initialization results. Please do not hesitate to reach out if any further clarification is required on this or any other matter. References: [R1] G. Jagatap and C. Hegde. Sample-efficient algorithms for recovering structured signals from magnitude-only measurements. IEEE Transactions on Information Theory, 65(7):4434–4456, 2019. [R2] T. T. Cai, X. Li, and Z. Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016. [R3] F. Wu and P. Rebeschini. Hadamard Wirtinger flow for sparse phase retrieval. In International Conference on Artificial Intelligence and Statistics, pp. 982–990, 2021.
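The sparse spectral initialization discussed in this thread follows a standard recipe: estimate the support from per-coordinate correlation scores, then take the leading eigenvector of a weighted covariance restricted to that support. The sketch below is a hedged illustration of that recipe, not the authors' exact SPI implementation; the scoring rule and norm scaling are assumptions.

```python
import numpy as np

def sparse_spectral_init(A, y, s):
    """Sketch of SPI-style sparse spectral initialization.

    A: (m, n) Gaussian measurement matrix, y: (m,) intensity measurements
    y_i ~ (a_i^T x)^2, s: assumed sparsity. Returns an s-sparse estimate x0.
    """
    m, n = A.shape
    # 1) Support estimate: E[y_i * a_ij^2] is larger on the true support.
    scores = (y @ (A ** 2)) / m
    S = np.argsort(scores)[-s:]
    # 2) Leading eigenvector of the weighted covariance restricted to S.
    AS = A[:, S]
    M = (AS * y[:, None]).T @ AS / m
    _, V = np.linalg.eigh(M)          # eigh sorts eigenvalues ascending
    v = V[:, -1]
    # 3) Scale by the estimated signal norm and embed back into R^n.
    x0 = np.zeros(n)
    x0[S] = v * np.sqrt(np.mean(y))   # E[y] = ||x||^2 for Gaussian a_i
    return x0
```

The returned point is then handed to the refinement stage, which only needs it to land within the basin $\mathrm{dist}(x^0, x^\natural) < \gamma \Vert x^\natural \Vert$.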
Summary: The work proposes a new algorithm for phase retrieval of sparse signals. Specifically, it focuses on a faster algorithm targeting quadratic convergence with the same number of measurements that are also needed in other algorithms. A quadratic convergence rate is proved, and experiments illustrate the benefit in practice. Strengths: The paper focuses on aspects of phase retrieval that are often ignored. In particular, a provably faster convergence rate has not been the focus of other works so far. It is well-written and easy to follow. It may well become a new standard for phase retrieval (or a starting point for other similar algorithms) if other researchers can reproduce the excellent performance. Weaknesses: The novelty is limited in the sense that second-order algorithms are well known. However, the adaptation and convergence proof for the phase retrieval setting are indeed novel and interesting. It is not clear why this subset of existing algorithms has been used for the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are there any existing second-order algorithms for phase retrieval (maybe even without a convergence proof)? Why were these specific existing algorithms used for comparison? Does increasing the maximum number of iterations in the definition of "successful recovery" change the performance of the various algorithms in Figure 2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors present no drawbacks of their method compared to the existing algorithms. In particular, the fact that existing algorithms need fewer measurements for refinement but perform worse in the phase transition Figure 2 is surprising. 
Possibly, this is an artifact of the restriction to at most 100 iterations for success (if the initialization is bad, significantly more iterations might be needed and could still make an algorithm successful, albeit at a significant computational cost). It is strange that a faster algorithm is in this sense also more "robust". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
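The "successful recovery" criterion debated here is usually defined up to the global sign ambiguity inherent to real-valued phase retrieval: only $|a_i^\top x|$ is observed, so $x$ and $-x$ are indistinguishable. A hedged sketch of the standard criterion (the tolerance value is an assumption, not taken from the paper):

```python
import numpy as np

def dist_up_to_sign(x, x_true):
    """Distance modulo the global sign ambiguity of real phase retrieval."""
    return min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))

def is_successful(x, x_true, tol=1e-3):
    # Declare success when the sign-invariant relative error is below tol.
    return dist_up_to_sign(x, x_true) <= tol * np.linalg.norm(x_true)
```

Under this definition, capping the iteration count only changes which runs reach the tolerance in time, which is exactly the artifact the reviewer raises.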
Rebuttal 1: Rebuttal: > 1. Are there any existing second-order algorithms for phase retrieval (maybe even without a convergence proof)? Why were these specific existing algorithms used for comparison? **Reply:** Thank you for your insightful comment. You are correct that there are a few second-order algorithms for phase retrieval and sparse phase retrieval, as outlined in [R1,R2,R3,R4]. Among these, [R1] and [R2] concentrate on phase retrieval, while [R3] and [R4] target sparse phase retrieval. Notably, [R3] does not establish theoretical guarantees for convergence to the true signal, which underscores the necessity and novelty of our work. [R4] introduces a second-order algorithm with theoretical guarantees, which is compared with our algorithm. The algorithms we chose for comparison in our paper are considered state-of-the-art for sparse phase retrieval and offer theoretical guarantees. Our proposed second-order algorithm not only maintains the same per-iteration computational complexity as popular first-order methods but is also the first to establish a quadratic convergence rate for sparse phase retrieval. We will clarify this point in our revised manuscript to highlight the unique contributions of our work over existing second-order methods. References: [R1] B. Gao and Z. Xu, “Phaseless recovery using the Gauss–Newton method,” IEEE Transactions on Signal Processing, vol. 65, no. 22, pp. 5885–5896, 2017. [R2] C. Ma, X. Liu, and Z. Wen, “Globally convergent Levenberg–Marquardt method for phase retrieval,” IEEE Transactions on Information Theory, vol. 65, no. 4, pp. 2343–2359, 2018. [R3] Y. Shechtman, A. Beck, and Y. C. Eldar, “GESPAR: Efficient phase retrieval of sparse signals,” IEEE Transactions on Signal Processing, vol. 62, no. 4, pp. 928–938, 2014. [R4] J.-F. Cai, J. Li, X. Lu, and J. You, “Sparse signal recovery from phaseless measurements via hard thresholding pursuit,” Applied and Computational Harmonic Analysis, vol. 56, pp. 367–390, 2022. > 2. 
Does increasing the maximum number of iterations in the definition of "successful recovery" change the performance of the various algorithms in Figure 2? This is an artifact of the restriction to at most 100 iterations for success (if the initialization is bad, significantly more iterations might be needed and could still make an algorithm successful, although for a significant computational cost). **Reply:** Thank you for drawing our attention to this point. We agree that increasing the maximum number of iterations can often improve results, particularly when the number of samples is not sufficiently large. To address this, we updated our experiments by increasing the maximum number of iterations to 1000 for each algorithm and also raised the number of independent trial runs to 200 for averaging. We observed a slight increase in the probability of successful recovery for each algorithm in the scenario where $s = 50$, while no consistent increase was observed in the case of $s = 25$. It is worth noting that this adjustment to our experimental settings does not alter our original conclusion. **The results are provided in the attached PDF.** We appreciate your suggestions and will incorporate these updated experimental results in the revised manuscript. > 3. The authors present no drawbacks of their method compared to the existing algorithms. In particular, the fact that existing algorithms need fewer measurements for refinement but perform worse in the phase transition Figure 2 is surprising. **Reply:** Thank you for your insightful comment. We would like to clarify that, although theoretically our algorithm requires a larger sample complexity in the refinement stage for successful recovery when compared to other algorithms, this does not necessarily imply that it also needs more measurements in practice. 
The larger sample complexity arises when establishing Lemma B.5, where we bound a term related to the Hessian—a term not involved in the theoretical analysis of the compared algorithms. A more advanced technique would be needed in our theoretical analysis to achieve a tighter sample complexity during our algorithm's refinement stage. Additionally, it's important to note that the practical improvement of our algorithm in terms of sample size for successful recovery is not reflected in our theoretical analysis. We appreciate your suggestion and will include a more detailed discussion on this aspect in our revised manuscript. --- Rebuttal Comment 1.1: Title: Thank You for the Increased Review Score Comment: Dear Reviewer, We noticed that you have increased the score for our submission. We would like to express our sincere gratitude for your time and consideration in reviewing our work. We appreciate your positive recognition and are encouraged by it. Please do not hesitate to reach out if you have any other questions. Thank you once again. Best regards, The Authors --- Rebuttal Comment 1.2: Comment: Thank you for the detailed response. Indeed, I did increase my score due to the elaborate answer of all questions raised in my review. I'm looking forward to reading the revised manuscript. --- Reply to Comment 1.2.1: Comment: Thank you for your positive feedback and for recognizing our efforts to address all the questions raised in your review. We greatly appreciate your constructive comments, which have guided us in improving our manuscript.
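For context on the refinement stage this thread keeps contrasting against, the first-order baselines (e.g., hard-thresholding-pursuit-style methods) alternate a gradient step on the intensity loss with a hard-thresholding projection. The sketch below illustrates that generic first-order template only; the paper's own update is second order (Newton projection) and is deliberately not reproduced here, and the step size is a hand-tuned assumption for unit-scale signals.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def refine_iht(A, y, x0, s, step=0.05, iters=100):
    """Sketch of iterative hard thresholding for sparse phase retrieval
    with intensity measurements y_i ~ (a_i^T x)^2, started from a
    spectral-initialization point x0 inside the local basin.
    """
    m = A.shape[0]
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        # Gradient of f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
        grad = A.T @ ((Ax ** 2 - y) * Ax) / m
        x = hard_threshold(x - step * grad, s)
    return x
```

Such first-order refinements converge linearly near the truth, which is precisely the gap the paper's quadratically convergent second-order refinement targets.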
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you for dedicating your time to review our manuscript and for your insightful comments. Your feedback has significantly contributed to improving the clarity and overall quality of our paper. In response to the concerns raised, we have conducted additional experiments and included the results in the attached PDF. We hope that these additional results will address your concerns and strengthen our paper. We look forward to your continued feedback. Pdf: /pdf/35c510d9f823220d1f6f638c1c064305d18fcf6e.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Interpretable Reward Redistribution in Reinforcement Learning: A Causal Approach
Accept (poster)
Summary: This paper proposes a novel algorithm for return decomposition with causal treatment. To do reward redistribution, GRD uses factored representations to model the Markovian reward function and dynamics function. Strengths: The writing is clear and easy to follow. It is interesting to see the visualization in section 6.4, especially Fig. 4. Weaknesses: The technical contribution is somewhat limited and stronger experiments are expected. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The technical contribution of the paper seems to be limited. The major contribution is the learned generative model. However, the proposed framework is similar to [27], which also uses a factored MDP, learns a mask on the factors, and optimizes the model in a generative way. Is it correct to say GRD equals [27] plus Markovian reward functions? 2. As for the experiments in Fig. 3, GRD does not seem to outperform existing methods by a large margin. Except for HalfCheetah, Swimmer, and Humanoid Standup, GRD seems to be rather close to the baselines. Maybe it would be better to further test on some environments with a more obvious performance gap. 3. It is interesting to see some horizontal lines in Fig. 4(a), where some dimensions have a definite contribution to dimensions #27:54. I think it would be helpful to provide illustrations relating this to the MuJoCo ground-truth dynamics, which would prove that GRD indeed learns the true and interpretable causal structures. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 24n7 Thank you for your positive support and constructive comments. We provide our point-wise response below. **Weakness 1:** The technical contribution is somewhat limited and stronger experiments are expected. > **Reply 1:** Thank you for your comments. > As noted by Reviewer 6iFi, 1) "Interpretability: Having interpretable reward redistribution is an advantage over non-interpretable methods. This can be used to diagnose the reason for failures in policy optimization." 2) "Reduces the state dimensionality: A very nice side effect of learning causal masks using a dynamics model is that a policy can be learned using very few features of the state. This leads to simpler policies, which could be more robust.", our contribution of providing an interpretable solution for delayed rewards is recognized by other reviewers, and we also clarify our technical contribution in A1 below. As for the experiments, we provide additional experimental results on other RL training backbones (Figure 3 in attached PDF), on the tasks from the Meta-World environment (Figure 4 in attached PDF) to demonstrate the state-of-the-art performance, as well as against noisy states to demonstrate the robustness of GRD. Please refer to the attached PDF. **Q1:** The technical contribution of the paper seems to be limited. The major contribution is the learned generative model. However, the proposed framework is similar to [27], which also uses a factored MDP, learns a mask on the factors, and optimizes the model in a generative way. Is it correct to say GRD equals [27] plus Markovian reward functions? > **A1:** Thanks for your questions. Our work differs from [27] in three ways. 1) Different task: [27] aims to identify the change factors across different domains and does not address the delayed rewards that we are interested in. 
2) Identifiability: For GRD, without the given long-term return, the reward function and the related causal structure are not identifiable. For [27], the reward function is identifiable given the Markovian rewards, which are not observable in our setting. 3) Estimation method: due to the unobservable Markovian reward, we treat the long-term return as the causal effect of all the Markovian rewards within an episode and use the corresponding loss (Eq 4). Additionally, different from [27], we treat the existence of the causal edges as variables, leading to different losses for the minimal-edge assumption (likelihood in GRD and L1 loss in [27]) and different model structures (Gumbel-softmax for sampling in GRD). Overall, although we both utilize DBNs, factored representations, and model the causal structure as binary masks, which are common practice for causal modeling [1][2], our work emphasizes the importance of learning an interpretable reward function and is the first to introduce causality into return decomposition. **Q2:** As for the experiments in Fig. 3, GRD does not seem to outperform existing methods by a large margin. Except for HalfCheetah, Swimmer, and Humanoid Standup, GRD seems to be rather close to the baselines. Maybe it would be better to further test on some environments with a more obvious performance gap. > **A2:** Thank you for the observation. We've extended our experiments to include the Meta-World environment, further showcasing our method's performance. Given the time constraints, we do not compare GRD with IRCR as it does not perform as well as RRD. Beyond performance, GRD demonstrates 1) interpretability (our primary objective) and 2) robustness compared to the baseline methods. As an example of 2), upon introducing Gaussian noise to the $28\sim 111$ dimensions of states in Ant-v2, the performance of GRD does not decrease. Please refer to Figure 1 in the attached PDF for the results. **Q3:** It is interesting to see some horizontal lines in Fig. 
4(a), where some dimensions have definite contribution to dimensions #27:54. I think it will be helpful to provide illustrations to correspond this to the mujoco ground truth dynamics, which proves that GRD indeed learns the true and interpretable causal structures. > **A3:** While the corresponding real causal structure would be helpful to verify the interpretability of our method, it is inaccessible. Therefore, we provide some evidence to underscore the reliability of the learned causal structure: > - there should not be an edge from the unused dimensions of the state variable to the other variables. (L334-L336) > - the edges from different dimensions of $\boldsymbol a$ to $r$ always exist, corresponding with the reward design of penalising the robot if it takes actions that are too large, measured by the sum of the values of all the action dimensions. (L340-342) > > Apart from above, we explain why the learned redundant edges (horizontal lines) do not impact the policy learning in L336-340: there is no edge from the $28\sim 111$ dimensions of the next state to the reward, i.e., these dimensions do not exist in the identified compact representation, thus having no influence on policy learning. > **Reference** [1] Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, and Kun Zhang. Adarl: What, where, and how to adapt in transfer reinforcement learning. In International Conference on Learning Representations, 2021. [2] Huang, B., Lu, C., Leqi, L., Hernández-Lobato, J. M., Glymour, C., Schölkopf, B., & Zhang, K. (2022, June). Action-sufficient state representation learning for control with structural constraints. In International Conference on Machine Learning (pp. 9260-9279). PMLR. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and the efforts put into the rebuttal. I believe the authors have addressed most of the concerns. I will raise the score and please make the above modifications in the revised paper. 
--- Reply to Comment 1.1.1: Title: Response to Reviewer 24n7 Comment: Thank you for your positive feedback and recognition of our work. We will include the above modifications in the revised paper.
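The rebuttal above notes that GRD treats the existence of each causal edge as a variable and samples it with a Gumbel-softmax relaxation. As a hedged illustration of that mechanism (a binary-Concrete sampler with a straight-through-style discretization; shapes, temperature, and names are assumptions, not GRD's exact configuration):

```python
import numpy as np

def gumbel_softmax_mask(logits, tau=1.0, hard=True, rng=None):
    """Sample an (approximately) binary causal-edge mask.

    logits: real-valued edge-existence parameters, one per candidate edge.
    Returns 0/1 values if hard=True, otherwise relaxed values in (0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=np.shape(logits))
    g = np.log(u) - np.log1p(-u)                       # Logistic noise (difference of Gumbels)
    soft = 1.0 / (1.0 + np.exp(-(np.asarray(logits) + g) / tau))  # relaxed Bernoulli
    if hard:
        # Discretize at forward time; training would pass gradients through `soft`.
        return (soft > 0.5).astype(float)
    return soft
```

Edges with strongly positive logits are sampled as present almost surely, edges with strongly negative logits as absent, which is how a likelihood objective with a minimal-edge preference can prune spurious edges.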
Summary: This study introduces a novel approach, termed Generative Return Decomposition (GRD), to address a key challenge in reinforcement learning: identifying the state-action pairs that contribute to future, delayed rewards. While many methods redistribute rewards in a non-transparent manner, GRD offers a clear return decomposition by explicitly modeling the contributions of states and actions from a causal perspective. GRD works by first recognizing unobservable Markovian rewards and causal relations in the data generation process. Then, it leverages these to create a compact representation for policy training over the agent's most favorable state-space subset. The researchers provide theoretical proof of the identifiability of the Markovian reward function and underlying causal structure and models. Experimental data also reveal GRD's superior performance and interpretability compared to other methods. However, some limitations exist due to the assumptions made, such as the stationary nature of the reward function, which may not be applicable in dynamic or online RL scenarios. Strengths: The paper excels in presenting Generative Return Decomposition (GRD), an innovative method that improves interpretability in reinforcement learning. GRD successfully addresses the identification of impactful state-action pairs for future rewards. Its effectiveness is supported by both theoretical evidence and practical experiments, demonstrating its superior performance over other existing methods. Moreover, the paper demonstrates a high degree of clarity and coherence, enabling smooth comprehension. Additionally, the explicit description of assumptions contributes to a more profound comprehension of the inherent strengths and weaknesses of the study. Weaknesses: - The quality of the text in Figure 1 could be improved by removing the shadow around the text. The same applies to Figure 2. - Minor typos and errors that need to be edited for the next version of the paper. E.g. 
in line 172: "provide" -> "provides" - In Section 5, only one policy is considered, which is SAC. How about having the experiments run based on another policy optimization algorithm? What would be the differences in performance and results? - Regarding the last paragraph of Section 5, there are two possible scenarios to train the agent. - First, the generative model is learned while the policy is being updated, in an end-to-end paradigm. - Second, the generative model is first trained (and stays fixed thereafter), then the policy begins to be optimized. In either case, how would that affect the policy training and performance? And how the insights from the GRD interpretations would be changed? - In the experiment section, the visualizations for the learned causal structure are only provided for Ant. Please provide the same type of analysis for other environments. - Considering Figure 4, having GRD, how the agent's robustness and generalizability would be affected? For example, consider the case where there are some anomalies injected into the agent and environment interaction, more specifically changing some values from the state-space. If such anomalies target the features that are less important to the agent, then its performance should not be affected that much, right? If so, could you provide some results in this regard? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer EmbG We thank the reviewer for the comments. Below please see our responses as well as clarifications. **Weakness 1:** The quality of the text in Figure 1 could be improved by removing the shadow around the text. The same applies to Figure 2. > **Reply 1:** Thank you for your suggestion. We will revise the figures in the future version. **Weakness 2:** Minor typos and errors that need to be edited for the next version of the paper. E.g. in line 172: "provide" -> "provides" > **Reply 2:** Thank you for the comments. We will revise them in the future version. **Weakness 3:** In Section 5, only one policy is considered, which is SAC. How about having the experiments run based on another policy optimization algorithm? What would be the differences in performance and results? > **Reply 3:** We provide results training with DDPG and TD3 in Figure 3 in the attached PDF. As the experimental result shows, on the tasks of *HalfCheetah*, GRD consistently outperforms the baseline methods, *RRD-Bias*, *RRD-Unbias*, and *IRCR*, which are modified to run based on the same policy optimization algorithms, DDPG and TD3. We also provide results of *None*, which utilizes the observed delayed reward for policy learning directly. **Weakness 4:** Regarding the last paragraph of Section 5, there are two possible scenarios to train the agent. First, the generative model is learned while the policy is being updated, in an end-to-end paradigm. Second, the generative model is first trained (and stays fixed thereafter), then the policy begins to be optimized. In either case, how would that affect the policy training and performance? And how the insights from the GRD interpretations would be changed? 
> **Reply 4:** Although both approaches are possible for learning the reward model, we follow the first one and enjoy the following advantages: > 1) We can use the on-training policy to collect data rather than a random policy, which would collect data without diversity. > 2) Apart from that, it is more data-efficient to optimize simultaneously since it avoids collecting data for learning the generative model and the policy separately. > > As for the insights of GRD, they will not be changed, since GRD models the generative process of the environment, which is exactly the same for the two approaches. **Weakness 5:** In the experiment section, the visualizations for the learned causal structure are only provided for Ant. Please provide the same type of analysis for other environments. > **Reply 5:** Thank you for pointing that out. We provide the visualization of the learned causal structure in *Swimmer-v2*, as shown in Figure 5 in the attached PDF, and more results for other environments will be presented in the future version. > Since the ground-truth causal structure is not accessible, we verify the reasonability of the learned causal structure through some observations: > - All the edges from different dimensions of $\boldsymbol a$ to $r$ always exist, as shown in Figure 5 (d): *Swimmer-v2* shares the same characteristic that the edges from different dimensions of $\boldsymbol a$ to $r$ always exist, corresponding with the reward design of penalizing the swimmer if it takes actions that are too large, measured by the sum of the values of all the action dimensions. > - According to Figure 5 (b), the first dimension of action (torque applied on the first rotor) has an impact on the last three dimensions of state (angular velocity of the front tip, angular velocity of the first rotor, second rotor), which corresponds with the intuition that the part connected to the first rotor should be affected by the first dimension of action. 
We can get a similar observation for the second action dimension from Figure 5. > - We can observe that all the state dimensions are learned to be connected to the reward; the possible explanation is that in the swimmer robot, any changes of the front tip, or two rotors will impact the position of the robot, potentially influencing the reward. **Weakness 6:** Considering Figure 4, having GRD, how the agent's robustness and generalizability would be affected? For example, consider the case where there are some anomalies injected into the agent and environment interaction, more specifically changing some values from the state-space. If such anomalies target the features that are less important to the agent, then its performance should not be affected that much, right? If so, could you provide some results in this regard? > **Reply 6:** Sure, we provide the results with certain noisy states at the insignificant dimensions of the state, as shown in Figure 1 in the attached PDF. To mimic the anomalies, we introduced independent Gaussian noise (mean: $0$, std: $0\sim 1$) to the dimensions ranging from $28$ to $111$ while evaluating the policy. According to Figure 1, unlike the baseline methods, GRD is unaffected by the injected noises, demonstrating that GRD is more robust than others. That is because the insignificant dimensions are not in the compact representation, which serves as the input of policy. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks for your thorough response. In your response, you referred to the "attached PDF" a few times. By that, do you mean the submitted paper? --- Reply to Comment 1.1.1: Title: Attached PDF Comment: It is attached on the top of this page, which includes additional results and illustrations. Please kindly let us know if this help address your concerns. We are happy to answer any further questions should you have. Link: https://openreview.net/forum?id=w7TyuWhGZP&noteId=Mqm5eQDyt3. 
--- Reply to Comment 1.1.2: Title: Response to Reviewer EmbG Comment: We hope that we have already addressed all of your concerns in the response and the attached PDF. If you have any additional questions, we are more than willing to provide more clarification. Thank you once again for your time and patience.
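The noise-robustness argument in Reply 6 above reduces to a simple mechanism: the policy only sees the compact representation, i.e., the state dimensions the learned causal mask marks as parents of the reward, so perturbations to the remaining dimensions cannot change the policy's input. A minimal sketch of that selection (names and shapes are illustrative assumptions, not GRD's exact interface):

```python
import numpy as np

def compact_representation(state, reward_parent_mask):
    """Keep only the state dimensions marked as causal parents of the reward.

    state: (d,) state vector; reward_parent_mask: (d,) 0/1 mask from the
    learned causal structure. The policy consumes the returned sub-vector.
    """
    return state[reward_parent_mask.astype(bool)]
```

If Gaussian noise is injected only into dimensions outside the mask (as in the Ant-v2 experiment with dimensions $28\sim 111$), the compact representation, and hence the policy output, is unchanged.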
Summary: Delayed reward is a major challenge in reinforcement learning. The return decomposition technique is a direct way to resolve this issue while preserving the optimal policy. The existing works redistribute the returns in an uninterpretable manner. In this regard, this paper proposes GRD, which generates the Markovian rewards in delayed reward scenarios. GRD first identifies the causal relations of states and actions and forms a compact representation using a causal generative model. The experiment results show that GRD outperforms the baselines and enables visualization. Strengths: * Experiment results seem promising. Weaknesses: * The explanation on lines 345-352 is not sufficient and hard to understand. This experiment section is very important since the authors insist that GRD gives the interpretable structure of the reward. * The method has to construct causal inference, which is only possible when all the states are exactly defined. If the number of states explodes, the parameters to learn the causal structure would explode. If states and actions are given, we can construct the causal structure without learning parameters. Minor * Equations are too messy and hard to understand. Use an underbrace in Equation 6. * Notations are not familiar. It is hard to understand C^{\cdot -> \cdot}, d^a , d^s (?). The authors have to redefine all the variables step-by-step to improve the presentation of this paper. * The arrow size is not consistent in Figure 2. The arrow to \hat{r}_3 is narrower. Also, the arrow directions which come from R are weird. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * I could not understand the experiment results in l345-352. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer ynzt Thanks for your constructive feedback. We provide a point-by-point response below. **Weakness 1:** The explanation on lines 345-352 is not sufficient and hard to understand. This experiment section is very important since the authors insist that GRD gives an interpretable structure of reward. > **Reply 1:** We appreciate your observation, but there seems to be a misunderstanding. The experimental result (Lines 345 - 352) aims to demonstrate the accuracy of the Markovian rewards predicted by our learned model, rather than the interpretable structure of the learned reward function (presented in L327 - 344). We visualize the comparison between the decomposed rewards and the ground-truth rewards to demonstrate the accuracy of Markovian reward prediction by GRD. As shown in Figure 5, the blue lines, representing redistributed rewards, consistently align with the corresponding red lines, which are the ground truth. This visualization demonstrates that GRD indeed distinguishes the state-action pairs with less contribution to the long-term return (episodic reward) from those with more contribution. We will revise this section in the future version to provide a clearer explanation. **Weakness 2:** The method has to construct causal inference, which is only possible when all the states are exactly defined. If the number of states explodes, the parameters to learn the causal structure would explode. If states and actions are given, we can construct the causal structure without learning with parameters. > **Reply 2:** Thanks for the question. This work focuses mainly on state-based tasks. 1) For settings where states are not defined, such as image-like input, our work can be applied by operating in a latent state space, which requires learning a latent vector representation, as AdaRL [1] does. 
2) For a state with a large number of dimensions, the complexity of the environment dynamics would increase, which thus requires more efficient causal discovery methods. 3) As an advantage, by learning the causal structure, we can constrain the optimization of the models over a small subspace of states and actions, resulting in a lower requirement on the number of parameters of the neural network. 4) Yes, it is possible to construct causal structures without learning the models, but it requires additional diverse data for causal discovery. Overall, the main focus of this work is to address delayed rewards via interpretable return decomposition; therefore, we consider the basic causal discovery algorithm. **Weakness 3:** Equations are too messy and hard to understand. Use underbraces in Equation 6. > **Reply 3:** Eq. 6 is introduced to regulate the sparsity of the learned causal structure to avoid trivial solutions. It is achieved by optimizing the parameters towards the direction of the nonexistence of the causal edge. Below, we revise Eq. 6 and hope to provide better delineation and understanding. > > Let $D_i(\boldsymbol{x})=\log P(\boldsymbol{x} _i)$, where $P(\boldsymbol{x}_i)$ is the probability that the edge $\boldsymbol{x} _i$ exists. Minimizing $D _i(\boldsymbol{x})$ prevents the causal edge from existing. Then Eq. 
6 is, > \begin{array}{ll} L _{\text{sp}}(\phi _{\text{cau}}) = \underbrace{\lambda _1 \sum _i D _i(\boldsymbol{c}^{\boldsymbol s\rightarrow r})} _{\text{state-to-reward}} + \underbrace{\lambda_2 \sum _i D _i(\boldsymbol{c}^{\boldsymbol a\rightarrow r})} _{\text{action-to-reward}} + \\ \underbrace{\lambda _3 \sum _{j \ne i} D _{i, j}(\boldsymbol{C}^{\boldsymbol s\rightarrow \boldsymbol s})} _{\text{state-to-state (excluding self-connections)}} + \underbrace{\lambda _4 \sum _{j = i} D _{i, j}(\boldsymbol{C}^{\boldsymbol s\rightarrow \boldsymbol s})} _{\text{state-to-state (self-connections)}} + \underbrace{\lambda_5 \sum _{j,i} D _{i, j}(\boldsymbol{C}^{\boldsymbol a\rightarrow \boldsymbol s})} _{\text{action-to-state}}. \end{array} > > These five terms are responsible for the sparsity of the causal structures of state-to-reward, action-to-reward, state-to-state (excluding self-connections), state-to-state (self-connections), and action-to-state, respectively. Here a self-connection represents the causal edge from $\boldsymbol{s} _{i, t}$ to $\boldsymbol{s} _{i, t+1}$. **Weakness 4:** Notations are not familiar. It is hard to understand $C^{\cdot \rightarrow \cdot}$, $d^a$, $d^s$ (?). The authors have to redefine all the variables step-by-step to improve the presentation of this paper. > **Reply 4:** We genuinely appreciate your feedback on this matter, and we will revisit and clarify the symbols in our paper. Here we explain the mentioned notations: > - $\boldsymbol{C}^{\cdot \rightarrow \cdot}$ denotes all the causal masks, *i.e.*, $\boldsymbol{C}^{\cdot \rightarrow \cdot}:=[\boldsymbol{C}^{\boldsymbol{s}\rightarrow \boldsymbol{s}}, \boldsymbol{C}^{\boldsymbol{a} \rightarrow \boldsymbol{s}}, \boldsymbol{c}^{\boldsymbol{s} \rightarrow r}, \boldsymbol{c}^{\boldsymbol{a} \rightarrow r}]$; > - $d^\boldsymbol{s}$ denotes the number of dimensions of the state; > - $d^\boldsymbol{a}$ denotes the number of dimensions of the action. 
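To make the mask notation and the five-term sparsity penalty above concrete, here is a minimal NumPy sketch (the function name `sparsity_loss`, the toy dimensions, and the edge probabilities are our own illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sparsity_loss(c_s2r, c_a2r, C_s2s, C_a2s, lams=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Five-term sparsity penalty: each term sums log-probabilities of edge
    existence (D_i(x) = log P(x_i)), so minimizing it pushes edges toward
    nonexistence."""
    l1, l2, l3, l4, l5 = lams
    D = np.log                                        # D_i(x) = log P(x_i)
    off_diag = ~np.eye(C_s2s.shape[0], dtype=bool)    # exclude self-connections
    return (l1 * D(c_s2r).sum()                       # state-to-reward
            + l2 * D(c_a2r).sum()                     # action-to-reward
            + l3 * D(C_s2s)[off_diag].sum()           # state-to-state (no self-conn.)
            + l4 * np.diag(D(C_s2s)).sum()            # state-to-state (self-conn.)
            + l5 * D(C_a2s).sum())                    # action-to-state

# Toy example: d_s = 2 state dimensions, d_a = 1 action dimension.
c_s2r = np.array([0.9, 0.5])      # c^{s->r}: d_s edge probabilities
c_a2r = np.array([0.8])           # c^{a->r}: d_a edge probabilities
C_s2s = np.full((2, 2), 0.5)      # C^{s->s}: d_s x d_s edge probabilities
C_a2s = np.full((1, 2), 0.5)      # C^{a->s}: d_a x d_s edge probabilities
loss = sparsity_loss(c_s2r, c_a2r, C_s2s, C_a2s)
```

Lowering any edge probability lowers the loss, which is the direction the regularizer optimizes toward.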
**Weakness 5:** The arrow size is not consistent in Figure 2. The arrow to $\hat{r}_3$ is narrower. Also, the arrow directions coming from R are weird. > **Reply 5:** Thank you for pointing this out. We will revise and ensure consistency in the size and direction of the arrows in Figure 2. **Reference** [1] Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, and Kun Zhang. AdaRL: What, where, and how to adapt in transfer reinforcement learning. In International Conference on Learning Representations, 2021. --- Rebuttal 2: Title: Response to Reviewer ynzt Comment: We sincerely hope that we have already addressed all of your concerns. We are also happy to answer any further questions.
Summary: The paper introduces a new algorithm called Generative Return Decomposition (GRD) for return decomposition with causal treatment. GRD addresses the problem by modeling causal relationships among variables, providing advantages over flat representations. It specifies each state and action as a combination of constituent variables and considers causal relationships within the system. The algorithm utilizes a factored representation similar to Factored MDP, enabling the formation and identification of the Markovian reward function based on causality. Unlike previous approaches, GRD uses a graphical representation to determine the contribution of each dimension of state and action to the Markovian reward. It also explains and models the observed delayed return as a causal effect of the unobserved Markovian reward sequence. The framework of GRD visualizes the causal relationships among environmental variables. The paper proves the identifiability of the underlying generative process and introduces a component-wise learning approach for recovering the causal generative process and redistributing rewards. The learned parameters provide a minimal sufficient representation for policy training, aiding in the effectiveness and stability of policy learning. The main contributions of the paper include the reformulation of return decomposition with a graphical representation, the introduction of GRD for learning the causal generative process, and empirical experiments demonstrating the method's superiority over state-of-the-art approaches in robot tasks with sparse rewards. Strengths: 1 - Interpretability: Having interpretable reward redistribution is an advantage over non-interpretable methods. This can be used to diagnose the reasons for failures in policy optimization. 2 - Reduces the state dimensionality: A very nice side effect of learning causal masks using a dynamics model is that a policy can be learned using very few features of the state. 
This leads to simpler policies, which could be more robust. Weaknesses: 1 - Writing: The paper needs a lot of work in explaining the method. Especially section 4 and section 5.1. A figure showing how the causal masks are applied would be a good idea. I am willing to improve my score, if the method explanation is improved. 2 - Experiments: The experiments include only Mujoco tasks. It would be interesting to see how the method behaves on delayed reward Atari environments like Bowling. Missing Related work: [1] Modern hopfield networks for return decomposition for delayed rewards Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1 - How do you decide which trajectory to store in the memory for training? 2 - Why is the dynamics model needed? The reward redistribution should be possible without the dynamics model. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Yes, the limitations have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 6iFi Thank you for your positive support for our paper! Below we provide a point-wise response to your concerns. **Weakness 1:** Writing: The paper needs a lot of work in explaining the method. Especially section 4 and section 5.1. A figure showing how the causal masks are applied would be a good idea. I am willing to improve my score if the method explanation is improved. > **Reply 1:** Thank you for your comments. We will revise Sec 4 and Sec 5.1 in the future version to include clearer and more detailed explanations, including a figure for the overall framework. Additionally, please refer to Figure 2 in the attached PDF, which illustrates how the causal masks are applied. **Weakness 2:** Experiments: The experiments include only Mujoco tasks. It would be interesting to see how the method behaves on delayed reward Atari environments like Bowling. > **Reply 2:** Thank you for your suggestion. Our method focuses on state-based environments, a setting that Atari does not satisfy. However, to further showcase the applicability of GRD, we provide additional experimental results: > 1) on three tasks from Meta-World, *pick-place-v2*, *push-wall-v2*, and *door-lock-v2* (Figure 4), to demonstrate the better performance of GRD compared with the baseline methods. > 2) with different RL backbones, *TD3* and *DDPG* (Figure 3), to show the consistent improvement of GRD on *HalfCheetah-v2*. > 3) under Gaussian noise with different standard deviations in the insignificant dimensions of the state in *Ant-v2* (Figure 1), to demonstrate more robust performance compared with the baselines. During the evaluation, the noise is inserted into dimensions $28\sim 111$, which are not used in *Ant-v2*. The performance of GRD is not affected because it correctly identifies the compact representation for policy learning. **Weakness 3:** Missing Related work: [1] Modern hopfield networks for return decomposition for delayed rewards. 
> **Reply 3:** Thank you for pointing this out. Hopfield-RUDDER improves RUDDER by replacing the LSTM with a continuous modern Hopfield network, and further employs history compression to facilitate the detection and storage of key events. However, it still follows the line of RUDDER and shares the drawback of lacking interpretability. We will incorporate the suggested paper in the future version. **Q1:** How do you decide which trajectory to store in the memory for training? > **A1:** We collect trajectories using the on-training policy and subsequently store them in the buffer. During model training, we uniformly sample data from this buffer. **Q2:** Why is the dynamics model needed? The reward redistribution should be possible without the dynamics model. > **A2:** It is true that reward redistribution does not necessitate a dynamics model. We construct an ablation version of GRD, GRD w/o CR. It relies only on the learned causal structure and reward function to train the policy, and does not perform as well as GRD. Incorporating the dynamics model benefits policy learning by helping identify a compact representation: through learning the dynamics model, we gain insights into the causal relationships that dictate the generation of the next state, *i.e.*, how $\boldsymbol{s} _{t}$ determines the next state $\boldsymbol{s} _{t+1}$. Such causal knowledge helps identify the minimal sufficient dimensions of the state for policy learning ($\boldsymbol{\tilde{c}} ^{\boldsymbol{s}\rightarrow \pi}$ in Eq. 7), which is called the compact representation in our paper. The compact representation is associated with the learned causal structure for generating the Markovian reward and varies with the learning of the reward function. Since the supervision signals for policy learning are generated by the learned reward function, the policy is consistently optimized within the smallest sufficient state space, resulting in a more efficient training procedure. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response. I am increasing my score by 1 point. --- Reply to Comment 1.1.1: Title: Response to Reviewer 6iFi Comment: We appreciate the reviewer for the positive feedback and recognition of our work.
Rebuttal 1: Rebuttal: # Attached PDF Thank you to all the reviewers for your invaluable insights and thoughtful feedback. Your expertise has greatly helped us refine and improve our work. Here we provide five figures in the attached PDF as a supplement to our response: - Figure 1: Evaluation with Gaussian noise. - Figure 2: The illustration of using learnable masks to predict the next state. - Figure 3: Learning curves on *HalfCheetah-v2* with different training backbones, DDPG and TD3. - Figure 4: Learning curves on three tasks from *MetaWorld*, Door Lock, Push Wall, and Pick Place. - Figure 5: Learned causal structure in *Swimmer-v2*. Pdf: /pdf/f311bb81c4aec7a091b86159d893940e951463a6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper addresses a major challenge in reinforcement learning: identifying which state-action pairs contribute to delayed future rewards. They propose a solution called "Return Decomposition" that redistributes rewards from observed sequences while maintaining policy invariance. Unlike other methods, their approach explicitly models state and action contributions from a causal perspective, making it interpretable. The authors introduce a framework called "Generative Return Decomposition (GRD)" for optimizing policies in scenarios with delayed rewards. GRD identifies unobservable Markovian rewards and causal relationships in the generative process. Using this causal generative model, GRD creates a compact representation to train policies efficiently. The paper proves the identifiability of the unobservable Markovian reward function and the underlying causal structure and causal models. Experimental results show that their method outperforms existing techniques, and visualizations demonstrate its interpretability. The source code for their approach is publicly available. Strengths: - The authors provide theoretical proof for the identifiability of the unobservable Markovian reward function and the underlying causal structure. This solidifies the theoretical foundation and robustness of the model. - The GRD method outperforms state-of-the-art methods in experimental results across a range of tasks. This demonstrates its practical effectiveness and application potential. - Visualization of the learned causal structure and decomposed rewards contributes to the interpretability aspect, a valued characteristic in contemporary machine learning. Weaknesses: I have no knowledge about reinforcement learning, and I don't understand why the system assigned me to review papers on this topic. Please disregard my review comments. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I have no knowledge about reinforcement learning, and I don't understand why the system assigned me to review papers on this topic. Please disregard my review comments. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I have no knowledge about reinforcement learning, and I don't understand why the system assigned me to review papers on this topic. Please disregard my review comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Xv6U We thank you for reviewing our paper! Thank you for your positive support!
null
null
null
null
null
null
InfoCD: A Contrastive Chamfer Distance Loss for Point Cloud Completion
Accept (poster)
Summary: The paper proposes a contrastive chamfer distance (InfoCD) for point cloud completion. More specifically, the paper shows that minimizing InfoCD is equivalent to maximizing a lower bound of the mutual information between the underlying geometric surfaces, which plays a crucial role in generating and reconstructing detailed object shapes. To verify the effectiveness of the method, extensive experiments are conducted and promising results are obtained. Strengths: 1. The idea is interesting and the paper is well-organized. 2. Extensive experiments are conducted and performances are promising. Weaknesses: 1. Since the real-world scenario is a critical application of point cloud completion, what about results on real-world datasets such as KITTI, as shown in existing works? 2. The evaluation metric in Tab. 1 should be L2-CD as well according to the PCN paper, since a square root is calculated in the evaluation code. 3. It is worth comparing the time and memory efficiency among different methods, especially when they are applied to real-world applications or mobile devices. 4. Inconsistent citation names for [25]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments. Below are our responses to the questions arising in the review: **1. Results on KITTI:** Following GRNet (*Xie et al. "GRNet: Gridding residual network for dense point cloud completion". In ECCV, 2020.*), we take a sequence of Lidar scans from KITTI. We then (1) extract points per frame within the object bounding boxes labeled as cars, (2) transform these incomplete point clouds to the box's coordinates, (3) complete them with a model pre-trained on cars from ShapeNet, and (4) finally transform the outputs back to the world coordinates. The table below lists our results where, again, InfoCD can improve the baselines consistently ("Fidelity" and "MMD" are two distance metrics, the smaller, the better; please refer to GRNet for more details), demonstrating that InfoCD can generalize well to different datasets (together with the other 4 datasets in the paper). Note that here we still use the default hyperparameters in the original code for fair comparisons.

| Method | Fidelity$\downarrow$ | MMD$\downarrow$ |
| --- | --- | --- |
| FoldingNet | 7.467 | 0.537 |
| **InfoCD+FoldingNet** | **1.944** | **0.333** |
| PoinTr | 0.000 | 0.526 |
| **InfoCD+PoinTr** | 0.000 | **0.502** |

**2. Evaluation metric in Table 1:** We follow the paper *Yu et al. "PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers". In ICCV, 2021.*, and use their evaluation code where L1-CD is used. This is different from the PCN paper. **3. Training time and GPU memory footprint for computational efficiency:** InfoCD has only a few more operations than CD, and thus in theory their computational efficiency should be similar. Numerically, training CP-Net with CD and InfoCD per iteration takes 0.4239$\pm$0.0019 and **0.4498$\pm$0.0030** seconds with 1052.627$\pm$0.0374 and **1053.692$\pm$0.0425** MB in GPU memory, respectively. **4. 
We will check the references to make everything consistent.** --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. If the evaluation code is different from the original PCN paper, why is the avg. metric of PCN shown in Tab. 1 the same as in the original PCN paper? Moreover, after double-checking the avg. metric, it seems that the avg. value 9.64 is not equal to the average value of all eight classes (unlike in the PCN paper), which actually should not be the case since the PCN dataset has an equal number of testing samples per category. Is there any reason for this, and could the authors double-check the evaluation? --- Reply to Comment 1.1.1: Title: Thanks for your questions! Comment: We use the public code released with the paper "PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers" in ICCV 2021. Please refer to https://github.com/yuxumin/PoinTr/tree/master where the evaluation code is located at https://github.com/yuxumin/PoinTr/blob/master/tools/runner.py In fact, there are some other papers citing **exactly the same numbers** on PCN as PoinTr and ours. For instance, 1. Wen et al. "Pmp-net: Point cloud completion by learning multi-step point moving paths." In CVPR 2021. (See Table 2) 2. Xiang et al. "SnowflakeNet: Point cloud completion by snowflake point deconvolution with skip-transformer." In ICCV 2021. (See Table 1) 3. Zhou et al. "SeedFormer: Patch Seeds based Point Cloud Completion with Upsample Transformer." In ECCV 2022. (See Table 1) We have tested the code, with **no change** to the evaluation, and got similar numbers. For consistency with other papers, we decided to copy these numbers into our table. We encourage the reviewer to test the code above for verification. -------------------------- **WE NOW FOUND A BUG IN THE EVALUATION CODE THAT WAS RELEASED ONE AND A HALF YEARS AGO!!! 
THANKS A LOT FOR THE REVIEWER'S COMMENTS!!!** The code we are using was released one and a half years ago; we found that the category "Cabinet" was not included in the calculation of the average. This is due to a bug in the test json file where "Cabinet" should use "c" instead of "C". After correcting the bug, the average numbers for PCN and ours are 11.27 and 10.84, respectively. Ours is still better. We will correct the numbers in the table as well.
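The averaging bug described above can be reproduced in miniature: when a category key in the test file has the wrong case, the lookup silently skips that class and the average is computed over seven classes instead of eight. A minimal sketch (the category names and per-class numbers are illustrative, not the paper's actual values):

```python
# Per-class L1-CD results, keyed by the names used in the results dict.
results = {"Airplane": 8.0, "cabinet": 12.0, "Car": 9.0, "Chair": 10.0,
           "Lamp": 11.0, "Sofa": 10.0, "Table": 9.0, "Watercraft": 9.0}

# Buggy: the test json lists "Cabinet" (capital C), which is missing from
# `results`, so that class is silently dropped from the average.
json_categories = ["Airplane", "Cabinet", "Car", "Chair",
                   "Lamp", "Sofa", "Table", "Watercraft"]
found = [results[c] for c in json_categories if c in results]
buggy_avg = sum(found) / len(found)        # averages only 7 classes

# Fixed: fall back to the lowercase key (or fix the json entry itself).
keys = [c if c in results else c.lower() for c in json_categories]
fixed = [results[k] for k in keys]
fixed_avg = sum(fixed) / len(fixed)        # averages all 8 classes
```

Since the dropped class here scores worse than the rest, the corrected average is higher than the buggy one, mirroring the direction of the correction reported above (e.g. PCN moving from 9.64 to 11.27).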
Summary: This paper proposes a contrastive Chamfer distance loss, which introduces contrastive learning into the CD loss. Experiments are conducted on PCN, MVP, ShapeNet-55/34 and ShapeNet-Part datasets, and state-of-the-art results are achieved on these datasets. Strengths: 1. The idea seems reasonable and the overall performance is good. 2. The paper is overall well written and easy to follow. Weaknesses: 1. Since the proposed loss is a supervised learning loss, I am a bit worried about the generalization ability of the proposed method in different datasets. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: When you compare with the original methods, have you retrained these learning-based models on the same datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments. Below are our responses to the questions arising in the review: **Generalization ability on different datasets:** To answer this question within the limited time, we test our method on **KITTI**, a real-world dataset. Following GRNet (*Xie et al. "GRNet: Gridding residual network for dense point cloud completion". In ECCV, 2020.*), we take a sequence of Lidar scans from KITTI. We then (1) extract points per frame within the object bounding boxes labeled as cars, (2) transform these incomplete point clouds to the box's coordinates, (3) complete them with a model pre-trained on cars from ShapeNet, and (4) finally transform the outputs back to the world coordinates. The table below lists our results where, again, InfoCD can improve the baselines consistently ("Fidelity" and "MMD" are two distance metrics, the smaller, the better; please refer to GRNet for more details), demonstrating that InfoCD can generalize well to different datasets (together with the other 4 datasets in the paper). Note that here we still use the default hyperparameters in the original code for fair comparisons.

| Method | Fidelity$\downarrow$ | MMD$\downarrow$ |
| --- | --- | --- |
| FoldingNet | 7.467 | 0.537 |
| **InfoCD+FoldingNet** | **1.944** | **0.333** |
| PoinTr | 0.000 | 0.526 |
| **InfoCD+PoinTr** | 0.000 | **0.502** |

--- Rebuttal Comment 1.1: Comment: Dear Reviewer SCvJ, Thanks for your valuable comments. We hope that our replies have well addressed your concerns about our submission. Please do let us know if you have more questions, and we will try to answer your questions asap. Thanks --- Rebuttal Comment 1.2: Comment: Thanks for the authors' rebuttal. The author didn't answer my question about whether to retrain the models of the original methods. --- Reply to Comment 1.2.1: Comment: Thanks for your concern! 
**Yes**, we have retrained the models of the original methods to guarantee that the numbers are reproducible.
Summary: This paper proposes a novel metric to measure the similarity between two point sets, based on the basic formula of the InfoNCE loss and the Chamfer distance. The key idea is to implicitly estimate the MI between the two point sets, and the way to achieve this target is to treat the distance between points as a measure of positive and negative samples. Strengths: 1. The reviewer is highly in favor of this paper, as it addresses a very fundamental problem in the deep learning of point cloud data: the similarity measurement between two point sets. 2. The geometry-based CD/EMD metrics have been used for years in the point cloud completion/reconstruction area, and the proposed InfoCD loss takes one step further by incorporating the idea of mutual information. The formulation of InfoCD, which is the combination of InfoNCE and the Chamfer distance, technically makes sense and is very easy to follow. 3. The experiments are great, covering most of the recent work and almost all popular benchmarks in point cloud completion. The improvement achieved by InfoCD is non-trivial, and the generalization ability and the performance gain are also impressive. 4. Many related tasks, such as 2D-3D reconstruction, unsupervised learning, and shape generation, can potentially benefit from this work. The potential applications of InfoCD may not be limited to the point cloud completion task. Weaknesses: 1. The convergence analysis is relatively weak, as only experimental evidence is provided instead of a more rigorous mathematical proof. This does not trouble the reviewer a lot, because the experimental results compared with the CD loss look very good and convincing. 2. It is a bit of a pity that only the point cloud completion task is discussed. Maybe one or two applications on other tasks could provide more evidence of the generalization ability and the effectiveness of the InfoCD loss, for example, 2D-3D reconstruction. 
In all, the reviewer does not see much weakness in this draft. It is a high-quality paper in terms of the point cloud completion research. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The author has fully addressed the limitations in the draft. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments. Below are our responses to the questions arising in the review: **1. Convergence:** We thank the reviewer for understanding. We will try to develop a convergence theory in our future work. **2. Generalization ability on new tasks:** Given the limited time, we add a new task of Single-View Reconstruction (SVR) that aims to reconstruct a point cloud from an image of the underlying object. Following 3DAttriFlow (*Wen et al. "3D shape reconstruction from 2D images with disentangled attribute flow". In CVPR, 2022.*) and SnowflakeNet (*Xiang et al. "Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer". TPAMI, 2022.*), we sample 30k points from the watertight mesh in ShapeNet as the ground truth, and output 2048 points for evaluation based on per-point L1-CD$\times10^2$. We replace CD in SnowflakeNet with InfoCD for training, and list the average comparison results below, demonstrating that InfoCD can generalize well to different tasks.

| Method | Avg. |
| --- | --- |
| 3DAttriFlow | 3.02 |
| SnowflakeNet | 2.86 |
| **InfoCD + SnowflakeNet** | **2.73** |

--- Rebuttal Comment 1.1: Comment: Good work. I have no further questions and would like to maintain my rating. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your support!
Summary: The paper introduces a novel loss function called InfoCD for point cloud completion tasks. InfoCD maximizes a lower bound of the mutual information, aiming to improve the quality of the completed point clouds. The experimental results presented in the paper demonstrate promising outcomes, indicating the effectiveness of the proposed approach. Strengths: - Exploring the improved CD loss as a research direction for point cloud reconstruction shows promise and holds significant potential. - The paper is well-written and effectively communicates its ideas, making it easy to comprehend and follow. - The experimental setup and execution in the paper are adequate, resulting in promising outcomes and supporting the proposed approach. - The visual results presented in the paper demonstrate good quality, further reinforcing the effectiveness of the proposed method. Weaknesses: 1. I observed a discrepancy between the equation presented in the paper and the implementation found in the provided demo code. This discrepancy, potentially caused by missing brackets and misrepresentation of the intended InfoCD, leads to a mismatch between the experimental results and the proposed idea. Consequently, concerns arise regarding the accuracy of the reported findings and the overall effectiveness of the proposed approach. I have reviewed the provided code in 'loss_utils.py' and compared it to the equation mentioned in Section 3.2. I have identified a discrepancy in lines 197 and 198. In the code, the calculation for l_infoCD(x_i, y_i) is implemented as "- torch.log(torch.exp(-0.2 * d1) + 1e-7 / torch.sum(torch.exp(-0.2 * d1) + 1e-7,dim=-1).unsqueeze(-1))". However, it appears that there are missing brackets in the expression. The correct calculation in Python should be "- torch.log((torch.exp(-0.2 * d1) + 1e-7) / torch.sum(torch.exp(-0.2 * d1) + 1e-7,dim=-1).unsqueeze(-1))" when the value of \tau is equal to 5. 
Therefore, the issue lies in the missing brackets in the code implementation, which deviates from the equation provided in Section 3.2. 2. Based on the last question, I have concerns regarding the fairness of the comparison. Specifically, I would like to inquire whether both the baseline and the baseline + InfoCD models were trained and tested using identical settings, including training hyperparameters and the number of training epochs. My worry stems from the possibility that the observed improvement may be attributed to factors such as updates to the codebase, variations in training hyperparameters, or even longer training durations. This concern is amplified by the existence of a bug affecting the loss function in the provided code. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
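To make the precedence issue discussed in this review concrete, here is a minimal, self-contained sketch (hypothetical scalar values standing in for the tensors in loss_utils.py) contrasting the two expressions:

```python
import math

# Hypothetical scalar stand-ins: d is one squared nearest-neighbour distance,
# s is the normalizing sum over exp(-0.2 * d) terms, eps = 1e-7 as in the code.
d, s, eps = 0.5, 100.0, 1e-7

# Uploaded version: operator precedence computes eps / s first, so the log
# argument is just exp(-0.2 * d) plus a vanishingly small constant.
uploaded = -math.log(math.exp(-0.2 * d) + eps / s)

# Version with the brackets the reviewer suggests: (exp(-0.2 * d) + eps) is
# grouped before dividing by the normalizing sum s, as in Section 3.2.
intended = -math.log((math.exp(-0.2 * d) + eps) / s)

# The uploaded expression collapses to roughly 0.2 * d (vanilla-CD-like),
# while the bracketed one carries an extra +log(s) normalization term.
print(uploaded, intended)
```

With these toy values, `uploaded` is approximately `0.2 * d`, while `intended` differs from it by roughly `log(s)`, which is exactly the mismatch the reviewer points out.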
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments. Below we respond to the reviewer's concerns. **1. Uploaded loss_utils.py code is incorrect:** Nice catch. Indeed this was not what was implemented. We maintained several versions and we apologize for accidentally uploading the incorrect version. **The implemented loss is exactly the regularized loss as in Eq. 6** used in all our experiments. \ *The correct loss_utils.py* version has, in lines 197 and 198 (as well as line 235): >distances1 = 0.2 * d1 + 1e-7 * torch.log( torch.sum( torch.exp( -0.2 * d1 ), dim=-1 ).unsqueeze(-1) )\ >distances2 = 0.2 * d2 + 1e-7 * torch.log( torch.sum( torch.exp( -0.2 * d2 ), dim=-1 ).unsqueeze(-1) ) \ >where d1 and d2 denote the two distance terms in the original CD, and 1e-7 is a trade-off constant for the regularizer, which is universal across different datasets and networks. We emphasize that this is exactly Eq. 6 (modulo the penalty parameter): $\mathcal{L_{\text{InfoCD}}}(x_i,y_i)=\frac{1}{\tau} \mathcal{L}_{\mbox{\small CD}}(x_i,y_i) + \lambda \mathcal{R}(x_i,y_i) $ >where we have set $\lambda=1e-7$ in all our experiments. Notice that the first term on the right, $\frac{1}{\tau} \mathcal{L}_{\mbox{\small CD}}(x_i,y_i)$, corresponds to the Python code "0.2*d1" or equivalently "-torch.log(torch.exp(-0.2 * d1))" (this equivalent term bears similarity to the incorrect version and, we believe, is the source of our mistake). The second term $\mathcal{R}(x_i,y_i)$ corresponds to the Python code "torch.log( torch.sum( torch.exp( -0.2 * d1 ), dim=-1 ).unsqueeze(-1) )". *Intuitive issue with uploaded version*: >Evidently, the incorrectly uploaded version with the misplaced brackets the reviewer identified may not work.
This is because "- torch.log(torch.exp(-0.2 * d1) + 1e-7 / torch.sum(torch.exp(-0.2 * d1) + 1e-7,dim=-1).unsqueeze(-1))" suppresses the contribution from the additional term, causing the loss to behave similarly to the vanilla CD loss (note that the distances are normalized to the unit interval). We encourage the reviewer to verify our implementation by replacing the indicated lines and running the demo code. **2. Validation of our experimental results:** We ran our experiments with InfoCD based on the **default** hyperparameters, such as the learning rate and the maximum number of epochs, in the public code, only replacing the CD loss with our InfoCD loss. This has been clearly stated in the paper in L264-266: **"Hyperparameters such as learning rates, batch sizes and balance factors in the original losses for training baseline networks are kept consistent with the baseline settings for fair comparisons."** --- Rebuttal Comment 1.1: Title: Further concern about Eq. 6 and the implementation Comment: Thank you for the authors' feedback. Based on the response, I have additional concerns regarding Equation 6 and its implementation. It appears that the paper does not provide any information about the value of lambda or an explanation for its usage. Equation 6 lacks a weighted regularization term. Moreover, the derivation of Equation 6 appears to be a result of simplifying Equation 5. The presence of lambda suggests a power operation of 1e-7 on the denominator, which requires further clarification. On the other hand, in point completion tasks the typical magnitude of the CD-L1 loss is around 1e-3; why add such a small weight to the regularization term? --- Reply to Comment 1.1.1: Title: Thanks for your concerns Comment: **1. Eq. 6 lacks a weighted regularization term. Moreover, the derivation of Eq. 6 appears to be a result of simplifying Eq. 5:** Below are our responses: *(1) Decomposition in Eq. 6 as a Motivation for Scaling the Regularization Term.* Eq.
6 shows that we can split our InfoCD loss into two components --- a CD loss plus a regularizer term. Drawing inspiration from this decomposition, we could re-weight the regularization term as is typical in practice. *(2) Conceptual Approach.* Nevertheless, we can conceptually ground the inclusion of the penalty term and derive an expression with the penalty $\lambda$ by drawing direct inspiration from a line of recent works (see, for instance, references [1, 2, 3] below). These works propose an alternate variant of differential weighting between positive and negative pairs for the InfoNCE loss and show improved empirical results. Motivated by these works, let us modify our expression in Eq. 1. Namely, consider the following modified expression for Eq. 1, where we differentiate the pairs in the numerator and denominator by different temperature parameters $\tau', \tau$: $\mathcal{L}_{\text{InfoNCE}} = -\sum_x\log f(x, x^+, x^-; \tau', \tau)$ where $f(x, x^+, x^-; \tau', \tau) = \frac{\exp\left[-\frac{1}{\tau'}d(x^+,x;\theta)\right]}{\exp\left[-\frac{1}{\tau}d(x^+,x;\theta)\right]+\sum_{x^-}\exp\left[-\frac{1}{\tau}d(x^-,x;\theta)\right]}$, $\tau\geq\tau'>0$ and $d$ denotes a distance function. We can easily see that Proposition 1 for InfoNCE still holds for this modified formulation. Now, we can re-derive the expression in Eq. 5. Specifically, $ \ell_{\text{InfoCD}}(x_i,y_i) = \frac{1}{\tau'|y_i|}\sum_k\min_jd(x_{ij},y_{ik}) + \log\left(\sum_k\exp\left[-\frac{1}{\tau}\min_jd(x_{ij},y_{ik})\right]\right) \propto \frac{1}{\tau|y_i|}\sum_k\min_jd(x_{ij},y_{ik}) + \lambda\log\left(\sum_k\exp\left[-\frac{1}{\tau}\min_jd(x_{ij},y_{ik})\right]\right), $ leading to $\mathcal{L_{\text{InfoCD}}}(x_i,y_i) = \frac{1}{\tau'} \mathcal{L_{\text{CD}}}(x_i,y_i) + \mathcal{R}(x_i,y_i) \propto \frac{1}{\tau} \mathcal{L}_{\text{CD}}(x_i,y_i) + \lambda \mathcal{R}(x_i,y_i),$ where $\lambda = \tau'/\tau$ is the ratio of the temperatures.
Furthermore, the rest of the analysis (Lemma 1) in the paper follows in a straightforward way with this modified loss. *(3) References:* [1] Wang and Isola. "Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere". In ICML, 2020. [2] Chuang et al. "Debiased Contrastive Learning". In NeurIPS, 2020. [3] Robinson et al. "Contrastive Learning with Hard Negative Samples". In ICLR, 2021. **2. The presence of $\lambda$ suggests a power operation of 1e-7 on the denominator, which requires further clarification. On the other hand, in point completion tasks, the typical magnitude of CD-L1 loss is around 1e-3, why add such a small weight to the regularization term?** Please note that $$\log\left(\sum_k\exp\left[-\frac{1}{\tau}\min_jd(x_{ij},y_{ik})\right]\right) \approx \log|y_i|$$ where $|y_i|$ denotes the number of points in the target point cloud $y_i$, and $|y_i|\approx1e4$ in our experiments for all the datasets. Therefore, we have $\log|y_i|\approx10$. Now, by substituting $\tau=5, \lambda=1e-7$ (as well as the typical magnitude of the CD loss, $1e-3$, as the reviewer suggested) into our InfoCD loss, we can easily calculate that the magnitudes of the first and second terms in InfoCD are about $1e-4$ and $1e-6$, respectively, which is reasonable for typical regularization. In fact, we observe that the value of $\lambda$ is quite robust within a large range. For instance, on the PCN dataset, using $\lambda=1e-3$ we can achieve 6.66 and 6.53 for InfoCD+PointAttN and InfoCD+SeedFormer, respectively, in contrast to 6.65 and 6.48 using $\lambda=1e-7$ in the paper. For simplicity, we set $\lambda=1e-7$ in all our experiments.
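As a sanity check of the magnitude argument in this thread, a short back-of-the-envelope script (values are taken from the rebuttal above; `n_points` is the assumed $|y_i|$):

```python
import math

# Values quoted in the rebuttal (n_points = |y_i| is an assumed round figure).
tau, lam = 5.0, 1e-7     # temperature and regularizer weight
cd_l1 = 1e-3             # typical CD-L1 magnitude cited by the reviewer
n_points = 1e4           # points in the target cloud, so R ~ log|y_i| ~ 10

first_term = cd_l1 / tau                 # (1/tau) * L_CD, on the order of 1e-4
second_term = lam * math.log(n_points)   # lambda * R, on the order of 1e-6

print(first_term, second_term)
```

This reproduces the rough orders of magnitude claimed above: the CD term is about two orders larger than the regularizer, as is typical for a regularization term.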
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their valuable comments. In summary, 1. We have responded to all the reviewer comments and uploaded a PDF file to show the point correspondences in training over epochs; this is based on the comment by Reviewer Lt2U. 2. We have added results from two new experiments: (1) a task of single-view reconstruction (SVR) for Reviewers Lt2U and eAft, and (2) KITTI results for Reviewers SCvJ and hcgP. 3. We have clarified a mistake in our demo code, as pointed out by Reviewer mFMW. **We will release our code for reproducing our experimental results upon acceptance.** Pdf: /pdf/3c56699fccdc999c7c4ce297eb6c7728186ff58b.pdf
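For reference, the corrected `distances1` computation clarified in the rebuttal above can be sketched with plain Python (hypothetical toy distances; the actual loss_utils.py operates on torch tensors):

```python
import math

# Hypothetical squared nearest-neighbour distances in one CD direction.
d1 = [0.1, 0.5, 0.2]
lam = 1e-7  # regularizer trade-off constant from the rebuttal

# Eq. 6 form: a scaled per-point CD term plus a shared log-sum-exp
# regularizer, mirroring the corrected line
#   distances1 = 0.2*d1 + 1e-7 * torch.log(torch.sum(torch.exp(-0.2*d1), ...))
reg = lam * math.log(sum(math.exp(-0.2 * d) for d in d1))
distances1 = [0.2 * d + reg for d in d1]

print(distances1)
```

The regularizer is a single shared scalar added to every per-point term, which is why it only gently reshapes the loss rather than replacing the CD matching.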
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a contrastive Chamfer distance to tackle the point cloud completion problem. The proposed CD loss maximizes the lower bound of the mutual information between two point cloud-based geometric surfaces, which leads to a more robust measurement of the similarities between two point clouds. On the other hand, the proposed CD loss is equivalent to adding a regularizer to the scaled CD, enabling a relaxed point alignment. Experiments of replacing CD with the proposed InfoCD in many state-of-the-art point completion models on MVP and some ShapeNet-based datasets show the good performance of the method. Strengths: - The introduction of the paper is concise and convincing. The authors have identified existing problems in the current research, and propose a solution based on these findings. - The authors have conducted extensive experiments for the point cloud completion task on various datasets and have used the proposed loss function in different state-of-the-art methods. - Although the analysis of the proposed CD loss is limited to point cloud completion tasks, the potential usage of the proposed loss function might be broader in various point cloud tasks. Weaknesses: - The authors claimed that the CD tends to have a hard constraint that points in the source point cloud should exactly lie on the points in the target point cloud. In contrast, InfoCD does not have this hard constraint. However, since usually the number of points in complete and partial point clouds is imbalanced, CD may not have this hard constraint. I wondered if a simple truncated CD would already solve this problem. - In Line 179, “with another assumption that the matched point pairs keep unchanged over iterations”, which may not always be true. Any intuitions or experimental validations? - Ablation study on $\tau$, lr is incomplete and confusing. The limitation is discussed but lacks some quantitative results for the efficiency analysis. 
The authors are encouraged to discuss the efficiency of the proposed method compared to the original CD loss. - Table 1, 2, 3, 4 have inconsistent method comparisons. Could the authors provide more explanations? - The application of the proposed CD loss is limited to point cloud completion. However, a broader discussion of other point cloud tasks could be discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the detailed comments above. My main concerns are some unclear arguments and experiment settings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have mentioned some limitations in the conclusion section. However, a more detailed discussion is expected. Please see the detailed comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable comments. Below are our responses to the questions arising in the review: **1.1 A hard constraint on matching for CD and InfoCD:** We think that the reviewer may have misunderstood this part. Firstly, we do not claim that InfoCD does not have such a constraint. In fact, due to the nature of nearest neighbor matching, InfoCD does have the hard matching constraint, i.e., one point in the source point cloud has a single match in the target point cloud. This is the same as CD. Secondly, both CD and InfoCD are applied, as distance metrics, to the reconstructed point cloud (NOT the input PARTIAL point cloud, as the reviewer thought) and the complete point cloud. **1.2 CD vs. Truncated CD vs. InfoCD:** Given the limited time, we implemented Truncated CD (T-CD) as T-CD = $\min(CD, thd)$ where $thd\in\{0.2, 0.4, 0.6, 0.8\}$ denotes a threshold, and tested T-CD on the ShapeNet-Part dataset used in the paper. This method achieves 4.72, 4.78, 4.88, and 4.75 in terms of L2-CD$\times10^3$, which can be slightly better than CD (4.82) but is significantly worse than InfoCD (4.01). **2. Experimental evidence for the assumption in L179:** In the **newly uploaded PDF file** (please check the attachment), as a demonstration we plot some point correspondences during training over epochs (10, 70, 130), as shown in Fig. 1, where the blue points are ground truth and the red ones are predictions. InfoCD is able to help stabilize (i.e., keep unchanged) the (correct) correspondences much faster in training. **3. Training time and GPU memory footprint for computational efficiency:** InfoCD has only a few more operations than CD, and thus in theory their computational efficiencies should be similar. Numerically, training CP-Net with CD and InfoCD takes 0.4239$\pm$0.0019 and **0.4498$\pm$0.0030** seconds per iteration with 1052.627$\pm$0.0374 and **1053.692$\pm$0.0425** MB of GPU memory, respectively. **4.
Inconsistency in compared methods in Tables 1, 2, 3, 4:** We follow the literature, and aim to compare against as many methods with public code as possible. Table 1 provides the results on PCN, which is the most popular benchmark in the task of point cloud completion. Table 2 focuses on the diversity of models on MVP. For fair comparisons, we use the public code released with the dataset to implement all the networks used for MVP. Tables 3 and 4 focus on ShapeNet-55/34, which were recently proposed as benchmarks and are smaller than the other datasets. We follow the previous works and choose a few representative networks to compare. On all the datasets with all the networks, our InfoCD consistently improves the performance. **5. A new point cloud task --- Single View Reconstruction (SVR):** Given the limited time, we add a new task of SVR that aims to reconstruct a point cloud from an image of the underlying object. Following 3DAttriFlow (*Wen et al. "3D shape reconstruction from 2D images with disentangled attribute flow". In CVPR, 2022.*) and SnowflakeNet (*Xiang et al. "Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer". TPAMI, 2022.*), we sample 30k points from the watertight mesh in ShapeNet as the ground truth, and output 2048 points for evaluation based on per-point L1-CD$\times10^2$. We replace CD in SnowflakeNet with InfoCD for training, and list the average comparison results below, demonstrating that InfoCD can generalize well to different tasks.

Method | Ave.
---|---
3DAttriFlow | 3.02
SnowflakeNet | 2.86
**InfoCD + SnowflakeNet** | **2.73**

--- Rebuttal Comment 1.1: Comment: Dear Reviewer Lt2U, Thanks for your valuable comments. We hope that our replies have well addressed your concerns about our submission. Please do let us know if you have more questions, and we will try to answer your questions as soon as possible.
Thanks --- Rebuttal Comment 1.2: Title: Response to authors Comment: Thanks for the authors' efforts in answering these questions. I have read all the comments by all reviewers and authors' responses. I think there remain some concerns in this work. I am optimistic that the work targets the fundamental problem in the loss function. However, the potential positive impact of point cloud-based research is questioned since the work still lacks solid evidence of generalizing to real-world applications. I appreciate that the authors have provided KITTI results that may prove---since only several numbers were provided---that InfoCD is effective in completing the car data in KITTI datasets. But it does not indicate the broader real-world applications of the proposed method. For example, the simplest task the authors could perform is a registration on KITTI as PCN was doing. Nevertheless, I think the paper could make it to the NeurIPS venue because of its effort in trying to improve the widely-used Chamfer loss, the experiments conducted on the point cloud completion task on various datasets, and the good performance it achieves on these datasets. However, I do not agree with reviewer eAft that the paper should be rated as "very strong accept" since the evaluation and the arguments still have some flaws. In response to the authors' response: 1. The "hard constraint" I referred to is mentioned in Figure 5 and related text. I understand that the proposed method still has this "constraint", but the authors claim the "constraint" to be less strict. In particular, the authors mentioned in Line 193-201 that CD forces the error between the ground truth and the predicted points to be zero during optimization while InfoCD only forces the error to be sufficiently small. However, in the real-world problem, the optimization of CD will not reach 0 error as expected yet yield good performance. 
The regularized CD as proposed in the paper converges to a lower value (as shown in Figure 5) but does not necessarily mean a better performance. I was thinking about whether the authors could rephrase these arguments and present a better way to explain the "weaker constraint" of the proposed CD. For example, a visualization of the real experiment data could be more convincing instead of showing illustrated Figure 5 (b) and (c). 2. The provided visualization does not show evidence that the correct correspondence will keep unchanged. I may be misunderstanding the figure. But I hope the authors could provide a better explanation. 3. The computational time and memory consumption seemed reasonable. 4. I have read the communication between the authors and the reviewer hcgP. I think the authors could further include a clearer explanation in the paper to clarify the experiment settings and the findings of the bug in the previous paper. 5. As I mentioned above, I appreciate the authors' efforts. But I do not think this will be sufficient evidence of broader applications. In conclusion, I would like to keep my rating.
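A minimal sketch (assumed form, not the authors' code) of the truncated-CD baseline tried in the rebuttal above, where per-sample CD values are clipped at a threshold thd:

```python
# Truncated Chamfer distance: T-CD = min(CD, thd), as described in the
# rebuttal; thd was swept over {0.2, 0.4, 0.6, 0.8} in that experiment.
def truncated_cd(cd_value, thd):
    return min(cd_value, thd)

# Hypothetical per-sample CD values clipped at thd = 0.4.
cds = [0.15, 0.45, 0.9]
print([truncated_cd(c, 0.4) for c in cds])  # -> [0.15, 0.4, 0.4]
```

Clipping caps the influence of badly matched samples but, unlike InfoCD, does not reshape the per-point matching itself, which is consistent with the small gains reported for T-CD.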
null
null
null
null
null
null
Compositional Sculpting of Iterative Generative Processes
Accept (poster)
Summary: This paper proposes an approach of Compositional Sculpting for iterative generative models, including GFlowNets and diffusion models. The model uses classifier guidance to sample from the target posterior distribution composed of pre-trained base models. The paper also proposes a training algorithm for the classifier. The approach is validated by empirical analyses on an image dataset and molecular generation. Strengths: - The method is general enough for different GFlowNets and diffusion models. - The empirical results validate the method. Weaknesses: (1) It would be convincing to have more complicated experiments, especially for image data. Colored MNIST might be too small and simple. (2) More clarity might be helpful for the following points. (2A) What is the model in line 193? "Under the model we have introduced the variables y_1, ... y_n are dependent given a state s in S, but, are independent given a terminal state x in X." (2B) I am confused while reading line 209. The sampling scheme is 1) sample y from its prior 2) sample tau given y. But the following sentence says sampling y given tau. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weakness section. Typos - Line 217: train train Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper has a limitation section. It seems not to have a dedicated section for social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our paper and your feedback. We’ve provided some clarification in response to your questions below. > What is the model on line 193? Equations (2)-(4) constitute a graphical model over the variables $x$ and $y_1, \dots, y_n$. We introduce a specialization of this model for GFlowNets in the first paragraph of section 3.1. This is the model we refer to on line 193 in section 3.2. We will clarify in the text that this refers specifically to the specialized model for GFlowNets. > Confused when reading line 209. The sampling scheme is 1) sample y from its prior 2) sample tau given y. But the following sentence says sampling y given tau. We allow conditioning on multiple observations. Thus, there are multiple i.i.d. variables $y_1, \dots y_n$, one for each observation. Line 209 explains that when generating samples to train on, the first observation $\widehat y_1$ is sampled from its prior, $\widehat \tau$ is sampled given $\widehat y_1$, and the remaining observations $\widehat y_2, \dots \widehat y_n$ are then sampled given $\widehat{\tau}$. > It seems not to have a dedicated section for social impact. Most societal impact issues are common to other work on generative models. However, our work has a positive impact in terms of opening the door towards efficient compositions of existing models, enabling wider applicability and computational savings. We will expand on this in a dedicated paragraph in the conclusion. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I raised the score. --- Rebuttal 2: Title: Adjust score after rebuttal Comment: Dear Reviewer m7vz: Thanks for your review. Please read the rebuttal and start discussing points of disagreement raised by the authors and you. Most reviewers are towards accepting the paper while you are relatively negative. Please reconsider your score and adjust it accordingly. AC
Summary: The paper describes a way in which, given sequential samplers from multiple probability distributions, a combination of the samplers can be used to sample from a composition of the distributions. To be precise, the sequential samplers are either GFlowNets or diffusion models, the combination of samplers is a weighted combination of action distributions at each intermediate sampling step, and the composition of distributions can be defined by simple soft conjunction and set difference operators. This is demonstrated in toy illustrative experiments, multiobjective molecule synthesis (GFlowNet), and MNIST with digit class and colour attributes (diffusion). Strengths: From the perspective of someone who works on both GFlowNets and diffusion models, this is a very well-written paper. - The text reads naturally and the right amount of detail is given in the main text. There is a good choice of illustrations to help the reader. - The composition of multiple GFlowNets has not been considered before and could be useful, especially in multiobjective problems. The unifying perspective on classifier guidance is also an advantage (but see below). - Code is provided, a nice addition to the paper. - I checked the GFlowNet-related math and believe it to be sound. Weaknesses: - Line 234, 255, 634, 643, maybe others: typo "GFLowNet" - It would be good to explain why / state as a subclaim that (8) is a policy (i.e., sums to 1 over $s'$), which is not actually obvious from the definition. - It relies on the fact that $p(y|s) = \sum_{s'}p(y|s')p(s'|s)$, which follows from conditional independence of $y$ (a function of the final state) and $s$ given $s'$. That is a consequence of the Markov property in GFlowNets. A note should be made about this. - The equality may not actually hold in practice, when $p(y|-)$ is a trained classifier, so (8) may not exactly sum to 1. What do you do in this case (in the experiments)?
- The results on molecule generation raise a few questions: - The reward exponent was set to $\beta=32$ or $\beta=96$, which is far larger than in past work, where it was at most 16. Why was such a choice made? This is suspicious, since convergence and mode collapse issues worsen at low temperatures. - Related, with such high exponents, one wonders about mode coverage in the learned distributions. Have you considered the in-sample diversity of the generated molecules (e.g., as measured by average Tanimoto similarity or diverse top-k metrics)? - On related work: - There is no substantial discussion of related work in the main text, even though there is a large body of work on compositional generation and classifier guidance with diffusion models (e.g., the many papers cited in the second paragraph of the introduction). - In the Appendix, the connection with [23] is discussed. The proposed method can easily turn a collection of classifiers for different objectives into a classifier for any convex combination of the objectives. It would be interesting to empirically compare this with conditioning of the model on the linear scalarization weights used in [23]. - The paper would be stronger if more explicit unifying connections were made between guidance in diffusion models and in GFlowNets. Note that diffusion models in a fixed time discretization are actually GFlowNets of a certain structure (cf. "A theory of continuous generative flow networks" [arXiv:2301.12594, ICML 2023] and "Unifying generative models with GFlowNets and beyond" [arXiv:2209.02606]). Classifier guidance using the gradient of $p_t(y|x_t)$ should be the continuous-time limit of equation (8). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see "weaknesses" above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review of our paper and thoughtful feedback. Please find our response to the raised questions below. > It would be good to explain why / state as a subclaim that (8) is a policy (i.e., sums to 1 over $s’$). Following your suggestion, in the subsequent revision of the paper we will include a note explaining why (8) is a valid policy (sums to 1 over $s’$). The preliminary text of the note including your suggested proof sketch is below: ``` Note that (8) is a valid forward policy, i.e. the distribution sums up to 1 over $s'$. This property follows from the relationship $p(y \vert s) = \sum_{s'} p(y \vert s') p_F(s' \vert s)$ which is implied by the probabilistic model: $y$ is a (stochastic) function of the terminal state $x$, $y$ and $s$ are independent given $s'$. ``` We will provide a more detailed proof in the appendix. > The equality may not actually hold in practice. What do you do in this case (in the experiments)? It is true that in practice, when only an approximation of the classifier is available, the policy constructed according to Eq. (8) may not exactly sum to 1. In our experiments, we expressed the conditional policy in terms of log-probabilities (logits) and computed the probabilities as $\operatorname{softmax}_{s’}(\log p_F(s’ \vert s) + \log p(y \vert s’) - \log p(y \vert s))$, where the softmax operation ensures that the obtained distribution sums up to 1 over $s’$. In theory (assuming a perfectly learned classifier), the softmax operation can be replaced with simple element-wise exponentiation, but using the softmax is also correct. > The reward exponent was set to $\beta = 32$ or $\beta = 96$. Why was such a choice made? We chose to set the reward exponent $\beta$ to $32$ and $96$ for the following reasons: * $\beta = 96$ was used in “Multi-objective GFlownets” [23] for a similar task (c.f.
parameters in Table 12, Section D.4 of [23]) * composition of models concentrated on high-scoring molecules is a more challenging and application-relevant task > Have you considered the in-sample diversity of the generated molecules We evaluated samples generated from GFlowNets pre-trained with different reward exponents $\beta$, in order to assess the effect of $\beta$ on mode coverage and sample diversity. The results are in Tables R.3 and R.4 in the rebuttal PDF. The details of the evaluation and the reported metrics are described in the table captions. As expected, larger reward exponents shift the learned distributions towards high-scoring molecules (the total number of molecules with scores above the threshold increases). For ‘SA’ and ‘QED’ models we don’t observe negative effects of large $\beta$ on sample diversity and mode coverage: the average pairwise similarity of top 1000 molecules doesn’t grow as $\beta$ increases and the ratio of Tanimoto-separated modes remains high. For ‘SEH’ models we observe a gradual increase in the average pairwise similarity of the top 1000 molecules and a gradual decrease in the ratio of Tanimoto-separated modes. However, the total number of separated modes grows as $\beta$ increases, which indicates that larger reward exponents don’t lead to mode dropping. > There is no substantial discussion of related work in the main text. The paper would be stronger if more explicit unifying connections were made. We plan to utilize the additional content page to expand the discussion of the related work on compositional generation and guidance in diffusion models. In particular, the papers cited in the introduction. We also will make a note that would help a reader to better position our work in the view of unifying connections between guidance in diffusion models and GFlowNets (and continuous GFlowNets). 
> The proposed method can easily turn a collection of classifiers for different objectives into a classifier for any convex combination of the objectives. It would be interesting to empirically compare with conditioning of the model on the linear scalarization weights used in [23] We understand the first part of the question in the following way: “The proposed method can easily turn a collection of GFlowNets for different objectives into a GFlowNet for any convex combination of the objectives”. Please let us know if our understanding is correct. Convex combination of the rewards $R(x | w) = \sum_i w_i R_i(x) = \sum_i w_i Z_i p_i(x)$ corresponds to a mixture distribution $p(x | w) =\sum_i \left( \frac{w_i Z_i}{\sum_j w_j Z_j} p_i(x) \right)$ which can be realized by both Multi-objective GFlowNets [23] and our approach. We mainly focus on harmonic mean, contrast, and other compositions beyond mixtures. Note that in our scenario (access to forward policies of pre-trained GFlowNets), sampling from the mixture can be realized without training a classifier: it is sufficient to sample an index of the base model from a categorical distribution, and then run the forward process of the selected base GFlowNet. In our approach, we represent the mixture as an individual GFlowNet forward policy (expressed through a classifier) because at the next step, we apply classifier guidance to this policy. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the answers! I have no further questions and maintain my positive assessment of the paper. > We understand the first part of the question in the following way: “The proposed method can easily turn a collection of GFlowNets for different objectives into a GFlowNet for any convex combination of the objectives”. Please let us know if our understanding is correct. Correct, my mistake.
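The softmax normalization of the classifier-guided policy described in this thread can be sketched in plain Python (hypothetical probabilities; the actual base policy and classifier are learned networks):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of log-scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical base forward policy p_F(s'|s) over three candidate next
# states, and an approximate classifier p(y|s') for a fixed outcome y.
p_F = [0.5, 0.3, 0.2]
p_y_next = [0.9, 0.2, 0.4]

# Guided policy ~ p_F(s'|s) * p(y|s') / p(y|s); the constant log p(y|s)
# cancels inside the softmax, which also guarantees the result sums to 1
# even when the learned classifier is only approximate.
logits = [math.log(pf) + math.log(py) for pf, py in zip(p_F, p_y_next)]
guided = softmax(logits)

print(guided)
```

With an exact classifier the softmax reduces to plain renormalization of $p_F(s' \vert s)\,p(y \vert s')$, matching the rebuttal's remark that element-wise exponentiation would suffice in theory.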
Summary: The paper studies the problem of composing independently trained generative processes of diffusion-based generative models and GFlowNets. The paper considers a setting where one has access to $m$ pre-trained samplers for $\{p_i(x)\}_{i=1}^m$, and the goal is to obtain a sampler which corresponds to a composition of these processes. Specifically, the authors consider two ways of composing the processes: harmonic mean, where the likelihood of the composition is high only where the component processes have high likelihood, and contrast. The authors frame this as sampling from a conditional distribution $p(x|y)$, where $y$ is an observation which denotes the index of the process a sample is generated from. This results in a procedure analogous to classifier guidance, which is popular in the diffusion literature. The authors show how the classifier guidance results in sampling from the desired composition. A critical component of this procedure is learning the classifier $p(y|s)$, for which the authors propose an MLE-based procedure using trajectories from the base models. The authors first validate their approach on some synthetic tasks on a 2D grid, followed by experiments on small molecule generation with GFlowNets and colored MNIST with diffusion. Strengths: * The paper studies an interesting question - which is relevant to the community. In particular, methods to leverage pre-trained models for various downstream tasks are becoming increasingly important with the growing adoption of pre-trained models for various domains. * Since similar classifier guidance approaches have been studied extensively in the literature on diffusion models, the novelty is relatively limited. Nonetheless, there are several technical aspects of the approach, such as the classifier training scheme, that are novel (to the best of my knowledge). * The proposed method is relatively simple conceptually, and in terms of implementation. 
* The experiments are well designed, and the results are quite promising, albeit with some caveats I mention below. * The paper overall is quite well written and easy to follow. I also appreciate the authors including the code with the submission. Weaknesses: * As the authors discuss in Section 3, their theoretical analysis is analogous to classifier guidance in diffusion models. On the other hand, [1] establishes equivalence between GFlowNets and diffusion models. As a result, it seems to me that the insights provided by the theoretical analysis aren’t particularly novel even though the path to achieving them was different (which could be seen as useful on its own). * The experiments on diffusion are limited to a simple coloured MNIST task, with no baselines from the classifier-guided diffusion literature. * The central aspect of the approach is learning the classifiers, however, there is no analysis of the classifiers - e.g. How accurate are the classifiers? What is the effect of the classifier performance on the results of the composition? How does the training of the classifier compare to simply training a GFlowNet from scratch in terms of runtime? * (Minor) A simple baseline that is missing in the experiments is training a GFlowNet with the appropriate composition from scratch. [1] Unifying Generative Models with GFlowNets and Beyond. Zhang et al. 2022. arXiv:2209.02606 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Could you please address the questions about the analysis of the effect of the classifier along with some runtime details? * What are the challenges in applying the method to more complicated tasks with diffusion models? * One natural question I had after reading the paper is that since the method encompasses GFlowNets and diffusion models, is it conceivable to be able to compose a mixture of GFlowNets and diffusion models on mixed continuous/discrete tasks? 
* In the experiments, the authors consider a maximum of 3 models being composed. Is there a reason to be limited to composing only 3 models? If not at least in the 2D example an experiment with more models might be helpful. Minor: * The main PDF on OpenReview appears to have some old version of the Appendix at the end. I assume that was by mistake? * In several places there is inconsistent usage of “GFLowNet” in place of “GFlowNet”. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors already discuss some limitations of the approach - effect of the classifier as well as the quality of the underlying models. I would also add limited evaluation and lack of comparison to existing approaches. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our paper and your insightful feedback. We have addressed the main questions and concerns you have raised below. > Classifier guidance for GFlowNets is not too novel considering the equivalence to diffusion models. This is a fair point. However, despite the known connection, classifier guidance for GFlowNets has not been proposed in prior work as far as we know. Our focus is on efficiently generating samples from compositions of iterative generative models using classifier guidance, which necessitated introducing classifier guidance for GFlowNets. > No analysis of the classifiers. How accurate are they, how does accuracy affect performance? The quality of the classifier is fundamental to the method, as the classifier guides the generative process of the mixture of base models $p_i(s)$ towards sampling from the posterior $\tilde p(s | y_1, \dots, y_n)$. If the classifier is poor, the sampling distribution will not match $\tilde p(x | y_1, \dots, y_n)$. The primary concern, then, is how close the classifier as a function of the state, $\tilde Q(y_1, \ldots, y_n \vert s)$, is to the ground truth $\tilde p(y_1, \ldots, y_n \vert s)$, rather than the absolute value of the classification loss (though we do want the loss to be as low as possible). In the experiments we have considered, there is a theoretical lower bound on the classification loss as the base distributions we have considered have some overlap. We have collected additional empirical results regarding classifier accuracy and its effect on the constructed composition in the rebuttal PDF. We will also add these results to the paper. Figures R.1, R.2, and R.3 show the cross-entropy loss of the classifier for terminal and non-terminal states (eqs (10) and (12)) as a function of the number of training steps for the GFlowNet 2D grid domain, the molecular generation domain, and the Colored MNIST digits domain, respectively. 
They show that the loss drops quickly but remains above 0. Figure R.1 further shows the distance between the composition and the ground truth as a function of the number of training steps for the classifier. This shows that the distance to the ground truth falls quickly in conjunction with the loss. > How does the training of the classifier compare to simply training a GFlowNet with the appropriate composition from scratch in terms of runtime? In addition to the new training curves in figures R.1-R.3, we have added new results regarding the runtime of classifier training in Tables R.1 and R.2 of the rebuttal PDF. These tables show the total runtime, as well as separate measurements for the time spent sampling trajectories and training the classifier. The runtime for training the classifier is of the same order of magnitude as training the base models. The main computational expense when training the classifier comes from sampling from the base models, comprising 70%-90% of the runtime, rather than training the classifier itself. In this regard there is certainly room for improvement in the training procedure, e.g. by sampling fewer trajectories, by training on individual trajectories multiple times, and by reducing the number of training steps (Figures R.1-R.3 show that loss plateaus quickly in all cases). However, we would like to stress that training the appropriate composition from scratch is far from trivial, and that our approach is more general. Specifically, in the case of GFlowNets, training the model requires access to the composite reward, which may not be available. Even if the base reward functions ($R_1, R_2$) are available, compositions can’t be expressed through the rewards only because operations on rewards are not analogous to operations on probabilities. For example, $\frac{R_1(x) R_2(x)}{R_1(x) + R_2(x)}$ is not a valid reward for the harmonic mean. 
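The point above, that $\frac{R_1(x) R_2(x)}{R_1(x) + R_2(x)}$ is not a valid reward for the harmonic mean because rewards are unnormalized, can be checked numerically. A minimal sketch with two made-up two-point distributions (the values are illustrative, not from the paper): the distribution induced by the reward formula depends on the arbitrary normalization constants $Z_i$, while the true harmonic-mean composition of $p_1$ and $p_2$ does not:

```python
def normalize(v):
    # Normalize a nonnegative vector into a probability distribution.
    s = sum(v)
    return [x / s for x in v]

# Two toy distributions over two outcomes.
p1 = [0.9, 0.1]
p2 = [0.1, 0.9]

# True harmonic-mean composition: proportional to p1*p2 / (p1 + p2).
hm = normalize([a * b / (a + b) for a, b in zip(p1, p2)])

def from_rewards(Z1, Z2):
    # Plug unnormalized rewards R_i = Z_i * p_i into R1*R2/(R1+R2),
    # then normalize the result. The answer depends on Z1, Z2.
    R1 = [Z1 * a for a in p1]
    R2 = [Z2 * b for b in p2]
    return normalize([r1 * r2 / (r1 + r2) for r1, r2 in zip(R1, R2)])
```

With `Z1 == Z2` the two formulas agree, but as soon as the partition functions differ (here `from_rewards(1.0, 10.0)` versus the symmetric `hm == [0.5, 0.5]`), the reward-based formula yields a different distribution, which is exactly why the composition cannot be expressed through the rewards alone.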
In the case of diffusion models, training requires samples from the composition, which are not available before realizing the composition. In addition, once the classifier has been trained, it can be used to sample from all supported compositions of the base models, rather than a single composition as would be the case for a GFlowNet or diffusion model trained from scratch. We will make these points clear in the paper. > What are the challenges in applying the method to more complicated tasks with diffusion models? The challenges are primarily practical. More complex diffusion models are more expensive to train and sample from. More expensive sampling also makes training the classifier and sampling from the composition more expensive. More complex compositions involving more base models and more observations require either more classifiers or larger classifiers (with more outputs). > Is it conceivable to be able to compose a mixture of GFlowNets and diffusion models on mixed continuous/discrete tasks? This is certainly interesting future work. For the current method, this is not possible as the base models must have a shared domain. However, composing diffusion models with GFlowNets with continuous domain, or GFlowNets with discrete diffusion models is certainly possible. > Is there a reason to be limited to composing only 3 models? No; there is no theoretical limit on the number of base models. Generally, we felt that 3 base models is a reasonably realistic setting. > The main PDF on OpenReview appears to have some old version of the Appendix at the end. I assume that was by mistake? Yes, this was an oversight on our part. We apologize for any confusion this may have caused. The correct appendix can be found in the supplement. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your response and apologies for the delayed response! 
> However, despite the known connection, classifier guidance for GFlowNets has not been proposed in prior work as far as we know. I agree; however, I believe it would be useful to highlight this connection in the paper. > Figures R.1, R.2, and R.3 show the cross-entropy loss of the classifier Thanks for these additional experiments. As you pointed out later in your rebuttal - there doesn't seem to be much improvement after a few hundred steps, which is a bit surprising. Additionally, this makes it hard to (empirically) understand the effect of the classifier performance since there isn't much range captured by the experiment. > Efficiency of training the classifiers Thanks for these details and the clarification. The training cost of the classifier is similar to training the GFlowNet from scratch. This is something which should be clarified in the paper, along with the computational challenges of using it with larger and more sophisticated diffusion models. I appreciate the authors' response, which answered most of my questions. I still believe the weaknesses remain, but the paper is strong enough for acceptance.
Summary: The paper proposes a method to compose multiple iterative generative models, i.e., either multiple GFlowNets or multiple diffusion models. The idea starts out with a mixture model over the generative models. Then, one can construct a categorical distribution over the generative models that tells us which model a sample originated from. By adapting classifier guidance to GFlowNets, the proposed method can compose multiple models in a way that allows both emphasizing and de-emphasizing specific models by treating the different generative models as different classes. On a diverse set of (toyish) experiments the method is shown to be effective for both GFlowNets and diffusion models. Strengths: The method is very interesting. In particular, the part where the question "which model was this sampled from" is treated as a classification task for the purpose of compositional generation. The paper addresses a very important problem with high impact. The presentation is easy to follow and the text is well-written. Weaknesses: The experiments clearly demonstrate the effectiveness and versatility of the proposed method. However, the experiments are limited to toyish settings and I believe more complicated settings would greatly enhance the impact of this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: All state-of-the-art diffusion models for images allow for text conditioning. Conditioning on different texts can be viewed as creating multiple different generative models from the viewpoint of this paper. I believe this is a more realistic setting than assuming that one would want to combine multiple diffusion models that were trained entirely separately from one another. It would be interesting to see some evaluations of this setting. A cheap strategy to obtain a classifier for this could be to fine-tune (part of) the CLIP model. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The main limitation is inherited from classifier guidance: the need to train a classifier on intermediate states. This is already mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The experiments clearly demonstrate the effectiveness and versatility of the proposed method. Thank you for the positive feedback and insightful comments! > The experiments are limited to toyish settings and I believe more complicated settings would greatly enhance the impact of this work. We naturally agree. Model composition with diffusion models and GFlowNets is still new. Our goal for this work was to formulate the problem and motivate further empirical study through our illustrative examples. We consider larger-scale problems, such as composing state-of-the-art image generation models, to be a natural next step. > Conditioning on different texts can be viewed as creating multiple different generative models. It would be interesting to see some evaluations of this setting. We did consider using conditioned models to emulate multiple distributions. One particularly interesting application is in safety, where one could compose safety or moral constraints on an existing text-conditioned generative model to remove harmful content. That said, we believe studies like this warrant separate comprehensive evaluation. In the current work, we focus on the development of a new approach to model composition, its theoretical foundation, and empirical validation. --- Rebuttal 2: Comment: Thanks to the authors for their response. I believe the points raised by reviewers eEwT and Ean9 on the related work (classifier guidance, compositional generation, etc.) are important to properly highlight in the paper. If these points are addressed I vote for acceptance.
Rebuttal 1: Rebuttal: We thank all reviewers for the time and effort dedicated to review of our work and for the helpful and constructive feedback. ## Motivation and focus of the paper Our work is motivated by the growing costs of general-purpose pre-training of generative models as well as the need for model reuse and control of the generation post-training. The problem of model composition is still new and the iterative nature of the generation processes in diffusion models and GFlowNets necessitates special methods. We have developed a formal approach where composition is defined as meaningful mathematical operations on a set of base probability distributions. These compositions are highly controllable, allowing us to emphasize or de-emphasize regions in the composition where specific base distributions have high density. We assume that we have access to a number of GFlowNets or diffusion models that generate samples from these base distributions, and provide a method to construct processes that generate samples from the composite distributions. Compared to training GFlowNets or diffusion models from scratch to reproduce these compositions, which is generally impractical or impossible, our method is both practical and more flexible, as after training it can generate samples from all supported compositions. Further, we derived generalized variants of the theoretical results on diffusion mixture and guidance for the case of GFlowNets. Following theoretical justification, we empirically validate the approach in a range of experimental settings including a practically-relevant molecule generation task. We believe that our work will motivate and support future research on principled approaches to generative model composition with potential for scale. ## Additional experimental results We have collected additional empirical results to support our response to reviewers’ comments. The new figures and tables are in the rebuttal PDF document (attached to this comment). 
We list the new results below, and discuss them in detail within the individual responses to reviewers. ### Classifier training time. Analysis of classifier and learned distributions. Following the suggestions made by reviewers UsMN and eEwT, we provide classifier training curves (Figures R.1, R.2, R.3) and a summary of the training time (Tables R.1, R.2). Figure R.1 also shows the distance between the learned compositions (obtained via our method) and ground-truth compositions in the 2D GFlowNet domain. ### Effect of reward exponent $\beta$ on the GFlowNet models: mode coverage and diversity Following reviewer Ean9’s suggestions, we have evaluated the sample diversity of the base GFlowNets in the molecule generation domain at different reward exponents $\beta$. The results are shown in Tables R.3 and R.4. ## Clarity improvement and subsequent revision Based on the feedback from the reviewers, we will expand the paper and incorporate a number of clarifications. We list the most important changes below: - We will add the additional experimental results (presented in the rebuttal PDF) and the corresponding discussion. - Following reviewer UsMN’s suggestion, we will elaborate on the details of the derivation of the non-terminal state classification loss (12), as well as computational complexity and numerical stability. - Following reviewer ZgyF’s suggestion, we will extend the discussion of the application of the method for controllable text generation and image generation via text-diffusion models. - Following reviewer ZgyF’s suggestion, we will discuss the scope of the paper and clarify the notion of compositionality in the context of our work as well as the relation to other forms of compositionality. - Following reviewer VKR5’s suggestion, we will extend the discussion of the application of the method for the composition of models obtained by conditioning on different text prompts in the context of text-conditioned diffusion models. 
- Following the suggestions of reviewers eEwT and Ean9, we will expand the discussion of the related work on composition and guidance in generative models as well as unifying connections between GFlowNets and diffusion models. - Following reviewer Ean9’s suggestion, we will add a subclaim explaining why the classifier-guided policy (8) is valid (sums up to 1 over $s'$) and explain the details of the practical implementation of the policy. - Following reviewer m7vZ’s suggestion, we will add a “Broader impact” section. Pdf: /pdf/4f01334c632c79ba8519e798e8dcbc694197e5d0.pdf
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: The current paper focuses on the challenge of composition generation from pretrained generative models, with a specific focus on GFlowNets and Diffusion models. In comparison to prior literature, two novel compositionality operations are introduced for generating samples that are simultaneously likely according to two generative models or likely per a subset and unlikely per the remaining models. This is a strict generalization of operations introduced in prior work on composition of energy-based models. Practically, the operations are instantiated via a framework motivated by classifier guidance in diffusion models. Experiments are conducted on a molecule generation application and a colored MNIST problem. Rebuttals acknowledgment: I had a good view of the paper before rebuttals and the authors' response to my questions was fair. I continue to keep my score accordingly. Strengths: I really like this work! The contributions are simple and straightforward, but very interesting. The formalization, generalized operations, and relation to classifier guidance were exciting to read through. The authors also appropriately acknowledge the limitations of their work, specifically the need for sufficiently strong component models. Weaknesses: - My biggest apprehension is limited experimental investigation, which would raise the quality of this paper quite strongly in my opinion. I do not hold this to be a strong weakness though. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As noted in the introduction, use of large scale generative models has been seen in several tasks in the foundation model era of machine learning. Given the limited empirical evaluation, I would like to gather the authors' thoughts on use of their frameworks for, say, controllable generation of language via LLMs or controllable generation of images via text-diffusion models (e.g., what would the model show for known compositionality failures of Dall-e and related methods? 
See [1]). [1] Conwell and Ullman, 2022. https://arxiv.org/abs/2208.00005. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - A clear note of what is meant by compositionality would help in this work, since the term is extremely overloaded. The experiments currently reported focus on what would be called systematicity or systematic generalization in my opinion (see Hupkes et al., "Compositionality decomposed"), but other valuable forms of compositionality, e.g., productive generalization, will arguably require some "chaining" operator that allows prior generated states to be fed into the model for generating the next state. Since the work focuses on GFlowNets and diffusion models, where a notion of sequential generation of intermediate states is present, arguably authors can use their defined operations to perform productive generalization as well? I would appreciate if the scope of this paper is clearly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We address specific questions below: > I would like to gather the authors' thoughts on use of their frameworks for, say, controllable generation of language via LLMs or controllable generation of images via text-diffusion models (e.g., what would the model show for known compositionality failures of Dall-e and related methods? Thanks for recommending the Conwell *et al.* paper! The CLIP training objective treats language prompts as a "bag-of-words" for compute reasons. Therefore, it is not surprising that the images generated by DALL-E 2 (and family) lack relational understanding. This type of constraint is the norm, not the exception, in large model training, which is why we believe finding better ways to model relationships is an important area to study. The types of relationships mentioned by Conwell *et al.* are often mappable to spatial arrangements. It is not difficult to envision learning multiple base models that each model specific relationships. Complex relational queries could then be represented as a composition of appropriate base models, and samples capturing these relations could be generated using the method we have proposed here. This is similar to prior work [27], which used multiple EBMs to model a number of relationships, and found that samples from appropriate compositions of these EBMs reproduced the target relationships significantly more faithfully than StyleGAN2 conditioned on a textual encoding of the relationships. We leave this to follow-up works, as it warrants comprehensive evaluation. > A clear note of what is meant by compositionality would help. I would appreciate if the scope of this paper is clearly discussed. Since submission we have worked hard to further clarify our exposition on compositional sculpting and our method. 
We have clarified that we focus on a narrow but well-defined type of composition where we look to algebraically combine (compose) probability densities in a controllable fashion, such that we can emphasize or de-emphasize regions in the composition where specific base distributions have high density. The harmonic mean and contrast operations we highlight in the paper are specific instances of this. Our paper focuses on a setting where we have access to GFlowNets or diffusion models which can generate samples from those probability distributions we wish to compose. The iterative nature of GFlowNets and diffusion models is preserved when composed using our method. Thus, if the base models exhibit productive generalization, so will their compositions. In addition, we would like to highlight that compositions themselves can be chained as well. As the compositions correspond to valid GFlowNets or diffusion models, one can compose these compositions with other GFlowNets or diffusion models. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. I'll keep my score as is.
Summary: This paper introduces a method to combine sequential generative models, in this case GFlowNets, so as to create new distributions from base models. This is done by training classifiers that are then used to guide sampling. The method is tested on a simple grid and a molecular domain (emulating the problem of the paper that introduced GFlowNet), as well as on MNIST with diffusion. Strengths: The paper is moderately easy to read, although some of its results were not immediately clear to me, so I spent quite some time doodling on paper to convince myself that the propositions were reasonable. What the paper proposes and seems to be able to achieve empirically is very interesting. Combining generative models in the ways shown here could be an amazing multiplier of large pretrained models. Weaknesses: Generally, the paper does not do a good job of convincing readers that the proposed method should work at the theory level, and that the effort of combining distributions by training a classifier is worth it (compared to retraining a generative model). Technical Quality: 3 good Clarity: 3 good Questions for Authors: l211 and around, I’m not sure I understand the move to approximate $p(y_i|s)$ with $p(y_i|x)$, even when $s$ and $x$ are related by $\tau = (s_0, \ldots, s, \ldots, x)$, a valid trajectory. How or when are these two quantities interchangeable? The objective in (12) involves $w_i$, but the paragraph following (11) seems to suggest that $Q(y_2, \ldots, y_n|x)$ can be replaced by the $w_i$s. So I’m confused whether (12) uses the loss described in (11) or a modified version of it. Another worrying aspect of eq (12) is that there’s an $O(nm)$ sum, which sounds like it can get expensive, and there’s a product of $(n-1)$ probabilities, which sounds like it can get awfully numerically unstable. I’m surprised the authors have been able to train models at all. Are there any tricks involved? The appendix is incomplete. 
In fact, part of the proposed contribution of the paper is to provide a combination method for diffusion models. This is never quite explained properly; readers are directed to appendix D, which is empty. In addition, although the propositions and theorems are analogs of past work, it was quite surprising to find theorems with no proofs in a paper. Even if the proof is almost identical to prior work, reproducing it with proper credit seems like the least one could do; in this case there are nuances with the GFlowNet framework that are left unexplained. The authors already highlight this limitation, but this seems like something quite fundamental that is for some reason not reported: **How expensive is it to train the classifier**? If it’s just as expensive as training a new generative model, then there’s little incentive to use the proposed method. I’d like not to have to take the authors’ word for it and instead see empirical evidence, training curves, wall time, and so on. Another missing result is some validation that the learned distributions are the expected ones (e.g. a plot showing that the JS divergence goes to 0 with more training/capacity) at least on a toy setting like the grid environment. I really appreciate what the paper is trying to accomplish, and the empirical results seem very nice (although lacking some crucial results). I’m not sure though that the proposed method is correct, and with no proofs or extra details of why this should work I’m really inclined to reject this paper. Happy to engage in conversation with the authors of course. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and feedback on our paper. > The appendix is incomplete The main-text PDF included an incomplete draft of the appendix by mistake. We apologize for the confusion this caused. The complete appendix is provided in the supplementary zip-archive (can be downloaded from the openreview page). The proofs of all theoretical results as well as the formulation of the approach for diffusion models are in the supplement. > How expensive is it to train the classifier? Following your suggestion, we empirically evaluated classifier training time and learning curves. The results are shown in Figures R.1, R.2, R.3, and Tables R.1, R.2 in the rebuttal PDF. The classifier training time is comparable to the base generative model training time. However, most of the classifier training time (more than 70%, or even 90%) was spent on sampling trajectories from the base generative models. Our implementation of the training could be improved in this regard, e.g. by sampling a smaller set of trajectories once and re-using trajectories for training, and by reducing the number of training steps (the loss curves show that classification losses plateau quickly). We would like to note that the composite distributions cannot be realized by training new generative models for the target composition directly. For diffusion models, training requires data from the target distribution, which is not available without realizing the composition first. For GFlowNets, training the model requires access to the composite reward (which might not be available). Even if the base reward functions ($R_1,R_2$) are available, the compositions (e.g., the harmonic mean) can’t be expressed through the rewards only: $\frac{R_1(x)R_2(x)}{R_1(x)+R_2(x)}$ is not a valid reward for the harmonic mean, since rewards are unnormalized. We also note that, once the classifier is trained, it can be used to construct multiple compositions of the base distributions. 
> Validation that learned distributions are the expected ones Figure R.1 (right) in the rebuttal PDF shows the evolution of the distances between learned compositions (realized via classifier) and the ground-truth composition distributions (in the 2D grid domain). For all compositions, as the classifier training progresses, the distance to the ground-truth distribution decreases. Compared to the distance at initialization, we observe almost an order-of-magnitude reduction in distance by the end of the training. > l211 and around, I’m not sure I understand the move to approximate $p(y_i| s)$ with $p(y_i|x)$. How or when are these two quantities interchangeable? We would like to clarify that we do not propose to approximate $p(y_i|s)$ with $p(y_i|x)$. The objective in eq. (12) uses $\ell(\hat\tau,\hat y_1,\ldots,\hat y_n;\phi)$. The value of $\ell$ appearing in (12) is exactly the same as given in eq. (11). We arrive at eq. (12) by combining several ideas. 1. Our goal is to train a classifier $\tilde Q(y_1,\dots,y_n|s)$. This classifier can be obtained as the optimal solution of $\min\limits_\phi \operatorname*{\mathbb{E}}\limits_{\hat\tau,\hat y_1,\dots,\hat y_n\sim\tilde p(\tau,y_1,\dots,y_n)}\ell(\hat\tau,\hat y_1,\dots,\hat y_n;\phi),$ where $\ell$ is defined in eq. (11). We can obtain an unbiased estimate of the loss (and its gradient) by sampling $(\hat\tau,\hat y_1,\dots,\hat y_n)$ and evaluating (11) directly. The challenge is not in computing (11), but in the expectation over $(\tau,y_1,\dots,y_n)$. The steps described in the paragraphs following (11) were introduced to obtain an estimate of this expectation. 2. 
The expectation above can be expressed as $\operatorname*{\mathbb{E}}\limits_{\hat\tau, \hat y_1\sim\tilde p(\tau,y_1)}\left[\sum\limits_{\hat y_2=1}^m\dots\sum\limits_{\hat y_n=1}^m \left(\prod\limits_{i=2}^n \tilde p(y_i=\hat y_i|x=\hat x)\right)\ell(\hat\tau,\hat y_1,\dots,\hat y_n;\phi)\right],$ where we re-wrote the expectation over $(y_2,\dots,y_n) | \tau$ in the form “expectation” = sum(“probability” * “value”). The expectation over $(\tau,y_1)$ can be estimated by sampling pairs $(\hat\tau,\hat y_1)$ as described in the paragraph after eq. (11). The only missing part is the probabilities $\tilde p(y_i=\hat y_i|x=\hat x)$, which are not directly available. 3. Our proposal is to approximate these probabilities as $\tilde p(y_1=j|x=\hat x)\approx w_j(\hat x;\phi)=\tilde Q_\phi(y_1=j|x=\hat x)$. The idea here is that the terminal-state classifier $\tilde Q_\phi(y_1|x)$, when trained to optimality, produces outputs exactly equal to the probabilities $\tilde p(y_1|x)$. 4. Steps 1-3 give a procedure where the computation of the non-terminal-state classification loss requires access to the terminal-state classifier. As we described in the paragraph preceding eq. (12), we propose to train the non-terminal and terminal classifiers simultaneously and introduce “target network” parameters. The weights $w$ are computed by the target network $\tilde Q_{\bar\phi}$. > Another worrying aspect of eq (12) is that there’s an $O(nm)$ sum, which sounds like it can get expensive In our experiments $n$ and $m$ were at most $3$, and the computational cost of the summation over $\hat y_2,\dots,\hat y_n$ was small. In general, one could trade off estimation accuracy for improved speed by replacing the summation with Monte Carlo estimation. In this case, the values $\hat y$ are sampled from the categorical distributions $Q_{\bar \phi}(y|x)$. Note that the labels can be sampled in parallel since the $y_i$ are independent given $x$. 
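The weighted enumeration in steps 1-4 can be sketched as follows; this is an illustration under our own naming (`loss` stands in for $\ell$ and `target_probs` for the target-network outputs $\tilde Q_{\bar\phi}(\cdot|x)$, which for simplicity are shared across all $y_i$ here), not the paper's implementation:

```python
import itertools

def weighted_loss_estimate(tau_hat, y1_hat, target_probs, loss, n, m):
    """Estimate the expected loss for one sampled (tau, y1) pair by
    enumerating the remaining labels y2..yn and weighting each assignment
    by the product of target-network probabilities."""
    total = 0.0
    for ys in itertools.product(range(m), repeat=n - 1):  # all y2..yn
        weight = 1.0
        for y in ys:
            weight *= target_probs[y]  # approx. of p(y_i = y | x)
        total += weight * loss(tau_hat, (y1_hat,) + ys)
    return total
```

Replacing the `itertools.product` loop with a handful of label samples drawn from `target_probs` gives the Monte Carlo variant mentioned in the reply.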
> There’s a product of $(n-1)$ probabilities, which sounds like it can get awfully numerically unstable. I’m surprised the authors have been able to train models at all. Are there any tricks involved? We only employed the standard techniques for improving the numerical stability of operations on probabilities (re-parameterization in log-probabilities) and did not observe any numerical instability issues. --- Rebuttal Comment 1.1: Comment: Thanks for all the precisions. I'm still having a hard time internalising why this works, but unfortunately do not have the time to dig into it. I will raise my score since you've addressed my concerns. > The classifier training time is comparable to the base generative model training time. It does seem like there's room for improvement here, but the loss does seem to plateau pretty fast on the classifier. Maybe this is somewhere where scale will make the gap clearer (intuitively it should be harder to train the generator, but science is all about beating intuitions so...) > re-parameterization in log-probabilities That makes sense; maybe worth mentioning (this apparently is a pretty central trick to making GFlowNets work as well).
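The log-probability re-parameterization discussed in this exchange is a standard numerical trick; a minimal self-contained illustration (our own sketch, not the authors' code):

```python
import math

def log_product(probs):
    """Product of probabilities computed as a sum of logs; the result is
    kept in log-space, so long products do not underflow to zero."""
    return sum(math.log(p) for p in probs)

# A direct product of 1000 factors of 0.01 underflows to exactly 0.0 in
# double precision, while the log-space value stays finite and usable.
naive = 1.0
for p in [0.01] * 1000:
    naive *= p
```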
Hyper-HMM: aligning human brains and semantic features in a common latent event space
Accept (poster)
Summary: The authors propose an HMM-based model, Hyper-HMM, for characterizing variability in temporal and spatial dimensions in fMRI sequence datasets. The model is a chain-structured HMM where each discrete state (event) defines a relationship between neural activity and a stimulus embedding. Importantly, the discrete state defines the mean of the subject's neural activity projected to a lower-dimensional space. Each subject has a different lower-dimensional projection matrix, while the events and stimulus embeddings are shared across subjects. These features allow for modeling spatial variability across subjects (via different projection matrices) and different temporal alignments (discrete estimation separate for each subject). The stimulus embeddings allow for identifying shared structure. The model is validated in a simulated experiment and on a dataset of fMRI recordings while subjects listened to computer science lectures. The authors examine recovery of latent states and clusterings of neural activity/stimuli in the simulated data. In the analysis of an fMRI dataset, the authors find variations in sequential activity & spatial coding across subjects. Importantly, they show the learned projections output statistically meaningful clusterings on heldout fMRI runs. Strengths: The proposed methods are original and significant for the analysis of fMRI datasets during sequential tasks across subjects. The idea to also embed the stimulus makes the learned latent state clusterings more interpretable, and may also help with identifiability. The experimental results relating semantic content of course videos to fMRI recordings across subjects appear very significant and useful for scientific analysis. Weaknesses: The clarity of the methods and experiment could be improved. One example is that it is unclear how the stimulus embedding is learned. 
At some points, the text implies that the stimulus is treated akin to a subject where a projection matrix is learned for the stimulus in the same way it is for the subjects. However, the stimulus is not included in Algorithm 1. Including more details on how the stimulus is incorporated into the model and learned would help with clarity. Next, some aspects of the modeling approach appear inconsistent which may limit significance. For example, the events $E$ are defined in voxel space whereas the event segmentation is done in the lower dimensional projection space defined by $W_i$. Next, the model fitting approach appears somewhat ad hoc, and it is not clear it corresponds to a single objective such as maximum likelihood estimation. More justification for the proposed fitting approach and simulated data analysis could rectify these concerns. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Can the authors clarify how the stimulus embeddings are learned? * Why are the events $E$ defined in voxel space when the event segmentation is done in the lower dimensional projection space defined by $W_i$? How does this compare to computing the events $E$ based on projected activity $W_i X_i$? I encourage the authors to consider an alternative fitting method that learns $G$ via the projected neural activity $W_i X_i$. * How sensitive is the model to initialization / how do results vary across different initializations? Minor comments - typos * Figure 2 caption: stimulated -> simulated Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1) Yes, the stimulus embeddings were treated like a subject when fitting the model (as you’ve already noted). We include one copy of the stimulus embeddings per each subject in order to prevent the model from over-favoring the human subjects during the forward-backward step. In Algorithm 1, we would adjust the number of subjects, n, to reflect this procedure. Note that because our approach does not require the number of voxels/features to be consistent across subjects/models, the fact that models are in a different feature space does not require any changes to the algorithm. 2) Events at the group level (G) are all defined in the low-dimensional space, and the HMM fitting takes place in this low-dimensional space. The event matrices E (in the original voxel/feature spaces) are computed and used only when updating the projection matrices W. The HMM update procedures, following the expectation-maximization algorithm (Baum-Welch), in combination with PCA updates to the projection matrices, maximizes the likelihood of data under the assumption that voxels have uniform error variance. https://www.sciencedirect.com/science/article/pii/S0169743998000902 3) There is no randomness in initialization - the HMM fitting starts by assuming very high measurement variance, which effectively starts the estimation procedure at the model's prior distribution. --- Rebuttal Comment 1.1: Title: Response to comment Comment: Thank you to the authors for their response. Upon consideration my primary concerns are unchanged, as detailed in the comments below. 2) I understand that PCA corresponds to maximum likelihood estimation in a certain generative model, and that EM in an HMM also finds local maxima in the log likelihood. However, it is not necessarily the case that the proposed combination of those steps here ascends a single log likelihood objective. 
For example, using PCA to determine $G$ implies that $G$ is a latent variable that should be marginalized over to determine the overall marginal likelihood (just as the low-dimensional latent variables from PCA are integrated out to compute the marginal likelihood in PCA). I think connecting the steps of the fitting procedure to an overall objective is important to understand what the algorithm is optimizing, but the connection remains unclear at this point. 3) In algorithm 1 it says $W_i \leftarrow N_{D \times V_i}(0, 1) \forall i \text{ in } 1...n$. Does that mean the matrices W are initialized as random samples with each element IID Normal with 0 mean and 1 variance? If so, then the initialization is not deterministic, and it is helpful to know how the model and performance vary across runs. If that means something else, I suggest that the authors clarify that line. --- Reply to Comment 1.1.1: Comment: 2. We apologize for our lack of clarity on this point. The overall log-likelihood being optimized is $p(X | W, G)$, i.e. the likelihood of the data given the transform and event pattern parameters. The standard Baum-Welch procedure alternates between an Expectation step in which we estimate the probabilistic latent state assignments for each timepoint (based on the current parameters), and a Maximization step in which we re-calculate the parameters based on the latent state estimates. The Expectation step in our model (computing the η variables in Algorithm 1) is unchanged from the standard procedure, since this takes place entirely within the low-dimensional space with W and G held constant. For the Maximization step, we seek to maximize the η-weighted log probability of our observation model as a function of the parameters W, G. 
As described in the paper (eqn 1 and the following paragraph), the observation log probability is proportional to the correlation between the projected data and the group-level event patterns, so we seek to maximize $\sum_i \sum_t \sum_e \eta_{i, t, e}\, \mathrm{corr}(W_i X_{i,t}, G_e)$ which is equivalent, up to a scale factor, to: $\sum_i \sum_e \mathrm{corr}(W_i E_{i,e}, G_e)$ This can be thought of as a canonical correlation analysis, in which we seek to find the projections W and G that maximize the correlation between the events E and the identity matrix. Using the standard solution to CCA, the optimal G is composed of the eigenvectors of the covariance matrix of E (averaged across subjects, since here we seek a shared G that fits all subjects simultaneously). This is equivalent to running PCA on the stacked event representations E as in our model, since each event representation is separately z-scored. Finally, the CCA solution for W is equivalent to linear regression of E onto G - our model performs this step via ridge regression to regularize the matrices W_i (since these have number of columns = number of voxels in a region). 3. The reviewer is correct that there is technically a random initialization of the transform matrices, but this description of the Algorithm was misleading on our part. During the very first pass of fitting, the initial event patterns G are all set to the same pattern (the mean of the projected data), and so the projected data plays no role; the state estimates η produced by the forward-backward algorithm are driven solely by the model priors and not the projected data (since each timepoint matches all events equally well). The Ws were randomly initialized only to allow the forward-backward algorithm to run, and are meaningfully set for the first time at the end of the first loop. A more straightforward description of Algorithm 1 would be to say that the fitting process begins with η (set to the model priors), not with G and the Ws.
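The M-step just described (PCA on the stacked, z-scored event patterns for a shared G, then ridge regression of each subject's events onto G for W_i) can be sketched as follows; the shapes, the `alpha` ridge penalty, and all names here are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def m_step(E_list, D, alpha=1.0):
    """One M-step sketch: shared G from PCA on stacked z-scored event
    patterns, then per-subject W_i by ridge regression of E_i onto G.

    E_list: per-subject event matrices, each (V_i voxels x K events).
    D: latent dimensionality. Returns G (D x K) and a list of W_i (D x V_i).
    """
    # z-score each event pattern (column), stack across subjects: (sum V_i) x K
    Z = np.vstack([(E - E.mean(0)) / (E.std(0) + 1e-8) for E in E_list])
    # PCA: the top-D right singular vectors of the stacked events give G
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    G = Vt[:D]                                       # D x K
    W_list = []
    for E in E_list:
        # ridge regression of the events onto G: find W with W @ E ~ G
        A = E @ E.T + alpha * np.eye(E.shape[0])     # V_i x V_i
        W = np.linalg.solve(A, E @ G.T).T            # D x V_i
        W_list.append(W)
    return G, W_list
```

The ridge term plays the regularization role mentioned in the reply, which matters because each W_i has as many columns as voxels in the region.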
Summary: This paper develops Hyper-HMM as a hybrid model that simultaneously aligns both temporal and spatial features across fMRI datasets. The proposed model learns a linear projection that maps voxels to a low-dimensional latent space, in which timecourses are segmented into corresponding temporal events. The purpose of this model is to remove the effect of each individual’s mental trajectory through an event sequence, and to also align with other feature spaces like stimulus content. Overall, it is an interesting paper; however, there are several concerns about the machine learning novelty, the validation of the empirical studies, and the clarity of the presentation of the proposed method. Strengths: Please refer to the question section Weaknesses: Please refer to the question section Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The followings are the major concerns and minor comments: 1) It is still unclear to me as to what the novelty of this paper is in terms of machine learning. There may be some contributions to computational neuroscience in this paper; however, which part of the machine learning approach is new in this paper? The author(s) of the proposed paper should specify whether there is anything theoretically new for the proposed approach or if this is merely an application paper for HMM techniques. 2) My second concern is the design of the empirical studies. Even though the study contains several beautiful figures, more numerical analyses would be helpful to convince the reader that the proposed method is effective. There is a lack of regular analysis and conventional machine learning metrics (such as accuracy, dice, etc.) which make it difficult to understand the results. 3) The proposed method should be benchmarked in comparison with related state-of-the-art techniques. 4) In this paper, the notations are confusing. 
In regular papers, scalars are denoted by lowercase letters, vectors by bold lowercase letters, and matrices by bold capital letters. In this paper, there are a lot of conflicts. It is very hard to trace what is a set, a matrix, or even a distribution. 5) There are some minor linguistic and typo problems in this paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to the question section Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1) Our model incorporates spatial alignment within the Baum-Welch update procedure for the temporal HMM, allowing for simultaneous spatial and temporal alignment. Although the components of this model are indeed taken from previous work, the combined model and its application to fMRI data are novel. We apply our approach to a dataset meant to maximally capture substantial variance across individuals learning and remembering complex pieces of information throughout an extended period of time (an entire semester), which has not been thoroughly examined in cognitive neuroscience/psychological sciences due to the methodological constraints we address in this paper. 2) Unfortunately, our choices in evaluating the model deviated from conventional machine learning metrics. Because of the inherently difficult nature of obtaining “ground truth” labels of an individual’s true cognitive state on a second-by-second basis, metrics from supervised learning are not applicable here. We use conventional machine learning metrics such as R2 to test how well projections generalize across runs. 3) As of yet, there are no suitable related state-of-the-art techniques against which we can benchmark our current model. Although there are existing spatial alignment, temporal alignment, and brain-stimulus encoding models available, none of these models have explored all three (spatial, temporal, stimulus feature) alignments simultaneously. This prevents us from applying them in cases when both temporal and feature dimensions are not aligned, such as for mapping between fMRI data (timepoints x voxels) and semantic models (sentences x features). 4) Our notation in the main text uses capital letters for matrices (such as the weight matrix, data matrix, etc.) and lowercase letters for scalars, with the exception of the scalar dimensionality D. 
5) We have identified a couple typos in the manuscript after the submission deadline; we hope that none of these caused confusion in understanding the paper. --- Rebuttal 2: Comment: I have read all the comments and the corresponding responses. I am not satisfied with the responses regarding my concerns and believe that this paper needs an additional revision stage to be ready for the publication process. I will keep my score as it is, however, I am open to other opinions as well.
Summary: The authors develop a method to identify and align events in the brain and external stimulus. They iteratively fit a Hidden Markov Model to find the times of events, and spatial characteristics of those events. Strengths: Originality: This work is a minor update to past work, by accounting for time shifting as well. Quality: The authors use careful validation with simulated data, and careful cross-validation in real data. Clarity: The paper is largely well written. Significance: The main problem of comparing neural activity across subjects and referencing those patterns to interpretable stimulus-driven semantics is an important one, especially as the amount of data in the field continues to grow. Weaknesses: The events in this paper refer to extended time periods. Events are often described as instants in time, rather than extended periods. It would be helpful to carefully articulate the definition of events to prevent misunderstandings. The authors make quite strong assumptions, despite protestations that their method is very general. In particular, I am skeptical about the "constant events" in the time course, and about the linear embedding of the semantic content. The authors only test out a narrow range of timing differences (25%), which leads me toward greater skepticism about the generalizability of the authors' results. Suggestion: It would be useful to test this method in a simulation with a deliberately time-warped movie, to check whether the method can recover the true timecourse from simulated data. Minor: Figure 2 caption has a typo: should read "simulated data", not "stimulated data". Technical Quality: 3 good Clarity: 3 good Questions for Authors: L235: "resulting in latent event representations that are largely orthogonal, which will fail to capture semantically-meaningful relationships and will fail to generalize to new events.” I don’t understand. Orthogonal components in event relationships should be common and natural. 
Why is orthogonality considered as meritorious? Three dimensions is shockingly low dimensionality of semantic space. This seems to me like analyzing image data and finding the dominant principal component, the sky. Linear projections onto semantic space are highly restrictive and unrealistic. But they could be a reasonable lower bound on alignment. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do an excellent job of articulating their problem and the essence of their solution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1) Here we rely on a definition of events, and event segmentation, widely used in psychology and cognitive neuroscience. Continuous streams of information, as is the case in videos or text, can be divided into smaller and smaller chunks (e.g. a book consists of chapters, chapters are composed of paragraphs, paragraphs of sentences, sentences of words, so on and so forth). Psychologists use this definition to study the ways in which people organize meaningful chunks of information (Yates, Sherman, & Yousif, 2023; Zacks et al., 2007), and have found that this may be the foundation in remembering contextually- or semantically-linked information (Ezzyat & Davachi, 2011). With respect to temporally-constrained information, such as the class videos we use in our paper, an event consisting of semantically related adjacent timepoints is an extended period of time. 2) Our simplifying assumptions are driven by the limitations of fMRI datasets, which have high levels of noise and generally consist of only ~1000 timepoint samples per subject. The “constant events” we refer to in our paper depend on event stability, or a spatial (voxel) pattern in the brain that appears across adjacent time points. This pattern stability is viewed as the neural instantiation of a stable event representation (Antony, 2021; Baldassano et al., 2017; 2018). Brain representations are of course never fully static, but we can define events at the shortest timescale for which there are meaningful dynamics that generalize across subjects. The use of linear embeddings of the semantic content is a form of learning linear mappings from neuroimaging data to semantic spaces, a standard practice in psychology/cognitive neuroscience, such as when creating encoding models of the brain (Naselaris, 2011). 
3) The simulations were intended only to show proof-of-concept behavior for the model, demonstrating that the architecture and fitting procedure is able to capture varying temporal onset/offset of events and topographical differences across individuals while applying increasingly high spatial noise. Overall, simulating realistic fMRI data is an ongoing challenge, given the highly complex spatial and temporal correlations present in a real dataset. We hope to have access to simulated fMRI datasets that are more representative of real data sometime in the future. Q1) Orthogonality across events would be indicative of events being equally distinct from each other, thus producing no meaningful event similarity structure shared across subjects. We believe that one strength of this model is the ability to learn a shared event space while preserving individual nuances. This aspect would be lost if the model were tuned to individuals without any regard to the shared event space. Q2) Given that the model is fit to data from single regions at a time, the low number of dimensions is consistent with pre-existing literature, such as seen with a commonly accepted fMRI dimensionality reduction algorithm, the Shared Response Model (SRM) <https://papers.nips.cc/paper_files/paper/2015/file/b3967a0e938dc2a6340e258630febd5a-Paper.pdf>. Q3) While linear projections onto a semantic space can be limiting at times, fMRI data itself is limited per subject. Our training and test datasets contained roughly 1000 timepoints on average, requiring the use of low-complexity models to accommodate the nature of the dataset. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Dear Authors, Thank you for answering the points that were raised, your response will be taken into account. Best, Your Area Chair
Summary: UPDATE: I have raised my score and now support acceptance of this paper. I believe it will be a good contribution to the conference. ----------------------------- The authors propose an extension for an HMM model proposed by Baldassano and colleagues to align interindividually different brain responses in both space and time to analyze naturalistic stimulation settings. They test their approach through simulations and on an empirical dataset of 19 participants, who watched a series of 5 computer science lectures. They show that their model is robust to a number of noise levels in the simulation analysis. In the empirical analysis, the model appears to learn a meaningful latent space (concrete vs abstract and future-oriented vs present tense descriptions) and outperforms a null model. Strengths: - The research problem addressed is an interesting and timely problem, namely how to compare spatially and temporally heterogeneous responses across participants - The paper is largely well-written Weaknesses: - The sample size is small - The approach is not compared to other state-of-the-art models, rather the authors compare it to a null model, which is poorly explained - The authors do not provide analysis code or sufficient detail to reproduce their results Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall, this work tackles an interesting problem and shows some first promising results, but I believe that it would require quite some work like more extensive simulations, comparisons to other methods and testing in a larger dataset or a second dataset (to demonstrate that this method is not only useful for applications to naturalistic stimulus analysis, but other areas of neuroscience research, for example aligning multiple imaging modalities) to meet the NeurIPS standard. Nonetheless, I really enjoyed reading the paper and I strongly encourage the authors to continue this interesting work. 
Here are some questions/comments that would allow me to increase my score. I hope you find them helpful and constructive. **Major points** - The actual model could be explained in greater detail. It would be helpful to include pseudo-code for the algorithm and the cost function explicitly in the manuscript. - Please, explain (at least in the supplement) the choice of preprocessing, e.g. quality control (motion detection/scrubbing, motion artifact rejection thresholds), software versions, scanner/coil details, filtering, slice-time alignment if conducted etc. - The choice of the simulation parameters is not clearly motivated, can you explain why you chose to simulate datasets with only 6 participants or only 5 voxels, this does not correspond well to empirical situations, where the sample would be larger, and #voxels >> #stimuli, which would be important to test whether your reweighting (p.4 125-126) works. It would also be helpful to express the noise as SNR to assess whether your method assumes realistic noise. - For the empirical analysis, it is unclear how you construct your baseline. Please, be more specific about how you computed that, otherwise it is impossible to assess the model performance - Related to that Figure 5 is not very clear, I am assuming that the black dot and the black diamond in Fig 5 reflect the actual value, whereas the grey shaded area is the null-distribution? This should be clearly stated in the figure captions. - Figure 5: It is quite striking that the fMRI stimulus match is much worse on the empirical data than the clustering. Based on the simulations (Figure 2), this is unexpected. Could you elaborate on what the reasons for this could be? - Figure 5: The fMRI-stimulus match null-distribution(?) is centered on a negative R^2. Can you explain why this happens? Could this be suggesting that the baseline model is not appropriate? - Will code be provided to ensure reproducibility of the analyses? - p. 6 ll. 
218-221: You state that the latent dimensions appear to map onto semantically meaningful dimensions (concrete vs abstract and future-oriented vs present tense descriptions). Is this in line with the literature? Further discussion of this would be helpful. - p. 9 l. 268-270: “Our experiments demonstrate that [...] this model […] can learn meaningful mappings” To show that these mappings are meaningful, it would be useful to show that the latent scores correlate with an external measure that was not shown to the model, but would be hypothesized to relate to the captured dimensions (e.g., IQ, exam scores, abstract thinking scores or something of that sort). Is any information like this available. That would strengthen your claims. **Minor points** - Could you elaborate on the relationship between your method and representational similarity analysis (https://doi.org/10.3389/neuro.06.004.2008)? You very briefly touch on related work (e.g. from Haxby), but I think this could be expanded a bit. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have mentioned some limitations (linear assumptions), but the small sample size is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1) We provided an algorithm (pseudocode) in the appendix submitted as part of our supplementary materials. 2) We used published data provided by Meshulam et al. here: https://openneuro.org/datasets/ds003233/versions/1.2.0 - please see the original dataset for details about the fMRI acquisition. We included a preprocessing script and directions for executing fMRIprep in the readme file in the supplementary material, and did not perform any additional preprocessing (e.g. slice-time alignment) beyond this. 3) The simulations were intended only to show proof-of-concept behavior for the model, demonstrating that the architecture and fitting procedure is able to capture varying temporal onset/offset of events and topographical differences across individuals while applying increasingly high spatial noise. Overall, simulating realistic fMRI data is an ongoing challenge, given the highly complex spatial and temporal correlations present in a real dataset. We hope to have access to simulated fMRI datasets that are more representative of real data sometime in the future. 4) Our baseline was to use random projections for each subject/model into the latent space, with each element of the projection matrix sampled from a standard normal distribution N(0, 1). This approach preserves the temporal structure for each subject (since this projection is held constant within subjects), while disrupting alignment between different subjects. 5) Yes, that is the correct interpretation of Figure 5. We only briefly mention the null distribution in the last line of the captions and should have indicated that they are represented by the gray shaded area. 6) Aligning fMRI data to the stimulus in the real dataset is a much more challenging task than fMRI-fMRI alignment. 
Rather than relating the same kind of data across people – which tends to have some similarities even before applying spatial or temporal alignment – this is an entirely different model with stimulus features that are only partially related to fMRI responses. This complexity is not straightforward to capture in our models, though note that at higher noise levels we do see performance that is reduced and more variable for the stimulus alignment (since there is only a single stimulus to align, as opposed to multiple subjects). 7) We would obtain R2 = 0 if the stimulus projections, on average, sit at the mean of the fMRI data. This means that random projections are likely to be below 0 (worse) since the stimulus projections will be randomly distributed with respect to the fMRI data. Because the fMRI data and stimulus model come from different representational spaces with differing dimensionalities, there is no simple baseline alignment between the two spaces. 8) We provided all code necessary to reproduce the entire pipeline from preprocessing the publicly available dataset (please see above response for preprocessing fMRI data) all the way to fitting the model in the supplementary material. 9) These axes of semantic variation have been observed in prior fMRI studies. Gilead et al., 2013 (DOI: 10.1016/j.neuroimage.2012.09.073) show differences in activity for concrete vs abstract and future- vs past- and present- tense sentences, and concrete vs abstract concepts engage different regions of the semantic network (Conca et al., 2021; DOI: 10.1038/s41598-021-02013-8). 10) Mapping the trajectories in the latent space to a behavioral measure would be a great validation of the method, but this would require a dynamic behavioral measure of second-by-second cognitive states; no such metric exists for this dataset, and it is also unclear how we could obtain this kind of measure without disrupting a student's comprehension of the material. 
Instead, we are arguing that our latent mappings are meaningful in the sense that they can generalize across runs (i.e. successfully align new fMRI-stimulus data). Minor 1) Similar to RSA, we make the assumption that event representations are linearly related across subjects and share the same similarity structure. However, RSA does not identify any temporal or spatial alignment, unlike work from Baldassano et al. or Haxby et al., respectively. --- Rebuttal Comment 1.1: Title: Most points addressed, except simulations Comment: Thank you very much for addressing most of my concerns. There are only two points, one major and one minor remaining: Major: 1) The simulation is still a major concern: Since the simulation is supposed to test whether the method works in principle, I think the simulation settings are paramount. This is especially the case in your study, since the empirical data is very small. I would strongly suggest including simulations with more appropriate settings (larger number of voxels or ROIs, exploring the relationship with temporal and spatial noise). I think reviewer 1rZM made some helpful suggestions in terms of how to go about that. Resting-state data that could be added as noise can be freely downloaded from large repositories. Alternatively, you could use other models such as the Wong-Wang-Deco model (https://doi.org/10.1523/JNEUROSCI.5068-13.2014) to simulate resting-state and add task-based perturbations, or use dynamic causal models (https://doi.org/10.1016/S1053-8119(03)00202-7) to simulate task-based data. If you conduct more realistic simulations, I would be willing to raise my score. Minor: 2) I suggest including the small sample size in the limitations section, or do you plan to include a test on a second data set? --- Reply to Comment 1.1.1: Comment: Major: 1. We have posted results and an explanation for an additional simulation experiment. Minor: 2. 
We have additionally applied the model to data collected during a movie-watching paradigm (from Aly et al., DOI: 10.1162/jocn_a_01308) and observed similar performance for aligning fMRI events across subjects. We can include these results in the final version of the paper as additional validation.
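The random-projection null baseline described in point 4 of the rebuttal above can be sketched as follows (a minimal NumPy illustration; the array shapes, function name, and parameter values are our assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection_baseline(subject_data, d_latent):
    """Project each subject's (time x voxels) matrix into a shared
    d_latent-dimensional space with a random N(0, 1) matrix that is
    held fixed within a subject: temporal structure is preserved per
    subject, but cross-subject alignment is destroyed."""
    projected = []
    for X in subject_data:  # X: (n_timepoints, n_voxels)
        W = rng.standard_normal((X.shape[1], d_latent))
        projected.append(X @ W)
    return projected

# Toy usage: three "subjects" with different voxel counts
subjects = [rng.standard_normal((60, v)) for v in (100, 120, 90)]
latents = random_projection_baseline(subjects, d_latent=5)
```

Repeating this projection many times would yield a null distribution of alignment scores against which the fitted model can be compared.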
NeurIPS_2023_submissions_huggingface
2023
Summary: In natural tasks that share input stimuli across participants, the cognitive states or neural responses of different participants might undergo approximately synchronous but slightly jittered dynamics. At the same time, the distribution of neural signals is not exactly consistent across participants at voxels with the same spatial coordinates. These two problems have previously been addressed separately, by event segmentation and functional alignment respectively. This paper proposes a new approach, H-HMM, that combines the advantages of both methods. The paper performs simulations to evaluate the method's performance and tests it on an fMRI dataset of college students watching the same series of lectures in computer science. The performance appears impressive and achieves what the model is designed for. Further, the paper also demonstrates alignment between fMRI data and semantic features of contents in the lecture. Strengths: * Simultaneously achieving temporal and spatial alignment has not been done before. * Comprehensive evaluation of the method is performed on both simulated and real data and shows good performance. * The illustration is generally clear and easy to understand. * The approach holds promise for a wide range of applications and should be a significant contribution to the field. Weaknesses: * I think there is a mismatch between the data size of simulated data and that of fMRI data or semantic features, making it a bit difficult to evaluate the expected performance of this method in general. The simulated data have only 5 voxels/features, which is rarely observed in the domain where this algorithm should be applied. The described numbers also do not match up: 4 events with 9-12 time points per event give rise to fewer than 48 time points, yet it is said that the simulated data have 60 time points. It is also strange that the event is represented by binary patterns. 
Maybe this is just for the purpose of easy visualization but I worry that the performance might depend on these unrealistic properties in the simulation. Doing it with similar properties to those expected for the fMRI data should be more convincing. If you encounter the issue that the actual fMRI data have lower effective dimensionality due to smoothness, it may be simulated by a spatial Gaussian process or other means, but I don't think it is justifiable to start with low-dimensional data, as you did not perform PCA on fMRI data to get similar dimensionality before application of the algorithm. * The description for the update of group-level G, starting from line 122, is a bit confusing. After the stacking, what is the dimensionality of the matrix? I assume the stacked data have a total number of elements as the number of events * (total number of voxels + semantic features) * D (since E is in the D-dimensional space). If your stacked data have two dimensions (which I assume is the case since you can do PCA on it), which of the two dimensions (after explaining their size) do you treat as data features and which do you treat as samples in the definition of PCA? After projecting the stacked data for each event into D-dimensional PC space, do you need to do anything else to get G (which is of the shape of number of events * D)? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My major questions are asked in the weakness, which I think are addressable. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1) The simulations were intended only to show proof-of-concept behavior for the model, demonstrating that the architecture and fitting procedure are able to capture varying temporal onset/offset of events and topographical differences across individuals while applying increasingly high spatial noise. Overall, simulating realistic fMRI data is an ongoing challenge, given the highly complex spatial and temporal correlations present in a real dataset. We hope to have access to simulated fMRI datasets that are more representative of real data sometime in the future. The 4 events were randomly selected in each individual to have anywhere from 12 - 18 time points per event. The range reported in the submission was from an earlier round of simulations and was mistakenly used instead of the most recent parameters for the simulations. We aimed to ensure that each event would have high variance across individuals in terms of onset and offset times. 2) The voxel/feature vectors for each subject/model are concatenated together for each event, yielding a two-dimensional matrix with rows = events and columns = all concatenated voxels/features. After PCA, this long concatenated dimension is reduced, yielding a matrix G where the dimensions are events x D (latent dimensionality). The values in the latent dimensions are the data features corresponding to each respective event. --- Rebuttal Comment 1.1: Comment: Thanks a lot for replying to my comments. I still like the paper. But I cannot agree with the difficulty of simulation: You can still simulate according to your model but simply increase the number of voxels to those commonly observed, and generate noise with a spatial-temporal Gaussian process. Also, although not perfect, there is an R-based fMRI simulator called neuRosim and a python-based one in BrainIAK. 
Lastly, one can also add patterns that mimic the spatial-temporal smoothness to resting-state fMRI data (while keeping the HMM and sequential order of patterns) and use resting-state data as noise. All of these will be more realistic than the simulated data used here. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We understand your concerns and have posted results and an explanation for an additional simulation experiment.
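The stacking-and-PCA step for the group-level matrix G described in point 2 of the rebuttal above can be sketched as follows (our interpretation, with assumed shapes; PCA is done via SVD so that rows, i.e. events, are the samples and the long concatenated voxel/feature dimension is the one being reduced):

```python
import numpy as np

def group_template(event_patterns, d_latent):
    """event_patterns: one (n_events, n_voxels_s) array per subject/model.
    Concatenate along the voxel/feature axis, then reduce that long
    concatenated dimension with PCA, keeping rows = events."""
    X = np.concatenate(event_patterns, axis=1)   # (n_events, total_voxels)
    Xc = X - X.mean(axis=0)                      # center each column
    # PCA via SVD: rows of Vt are principal directions in voxel space
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d_latent].T                  # G: (n_events, d_latent)

rng = np.random.default_rng(1)
patterns = [rng.standard_normal((4, v)) for v in (100, 120, 50)]
G = group_template(patterns, d_latent=2)
```

Here each row of G carries the latent features for one event, matching the events x D shape stated in the rebuttal.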
Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models
Accept (poster)
Summary: The paper studies the optimal rate for multivariate deviated models. Specifically, they consider the model $(1-\lambda) h(x) + \lambda f(x|\mu,\Sigma)$, where $h$ is known and the goal is to estimate the other parameters. The authors propose to use the notion of *distinguishability* and study the convergence rate of parameters using MLE under both distinguishable and non-distinguishable cases. The authors present three pairs of upper and lower bounds for distinguishable, non-distinguishable but $f$ is strongly-identifiable, and non-distinguishable but $f$ is a family of location-covariance multivariate Gaussian distributions. Experiments are also provided to corroborate their theoretical results. Strengths: - The paper is clear. The notations are very consistent for such a lengthy work. - There are plenty of explanations and discussions around each main result, making it an interesting paper to read. - To the best of my knowledge, the proofs are technically sound and highly sophisticated. The notion of distinguishability and identifiability seems very suitable and intuitive. - Lower bounds are also presented and match their convergence rate for all three cases considered. Weaknesses: - The idea of distinguishability is not wholly novel. Similar notions have been used in [1] for a different model, which in turn is derived from the notion of *identifiability* adopted in [2] and many other previous works. - The section for related works is very concise. - Given the classical parametric setting and the MLE estimation, I am not very sure if this paper would be a good match for NeurIPS rather than a more statistically-focused journal. [1] Do, Dat, Nhat Ho, and XuanLong Nguyen. "Beyond black box densities: Parameter learning for the deviated components." Advances in Neural Information Processing Systems 35 (2022): 28167-28178. [2] Nguyen, XuanLong. "Convergence of latent mixing measures in finite and infinite mixture models." (2013): 370-400. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - I totally understand the logic behind the organization of the sections but I would suggest shortening Section 3 and moving the results in Appendix A to the main content. Theorem 3.3/3.5/3.6 seem to be intermediate results for proving the convergence rates with artificial distances $\mathcal{K}$, $\mathcal{D}$, and $\mathcal{G}$. So I don't quite understand why they should take up nearly three pages, forcing the main results to be postponed to the appendix. - Say we would like to adopt the model to fit some data in practice. How should we obtain $h_0$ which is claimed to be known in this paper? Or $h_0$ could just be good enough and then this model may take over? Would you please comment on how the results may provide possible guidance to the methodology? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The idea of distinguishability is not wholly novel. Similar notions have been used in [1] for a different model, which in turn is derived from the notion of identifiability adopted in [2] and many other previous works.** Thank you for raising this concern. We would like to emphasize that the other main contribution of the paper is to develop "uniform" convergence rates for parameters $(\lambda^*, \mu^*, \Sigma^*)$, compared to "point-wise" convergence rates from previous work. To see further why this is significant for hierarchical/mixture models such as the one in this paper, let us reconsider the deviated model: $$p^*(x) = (1-\lambda^*) h_0(x) + \lambda^* f(x|\mu^*, \Sigma^*).$$ Suppose that $\lambda^* > 0$ is fixed, $\mu^*, \Sigma^*$ are fixed, and $h_0$ is distinguishable from $f$. Using the proof technique from previous work, it is possible to prove that the MLE $(\widehat{\lambda}_n, \widehat{\mu}_n, \widehat{\Sigma}_n)$ satisfies: $$\mathbb{E} |\widehat{\lambda}_n - \lambda^*| \leq C \left(\dfrac{\log(n)}{n}\right)^{1/2}, \mathbb{E} \|\widehat{\mu}_n - \mu^*\| \leq C \left(\dfrac{\log(n)}{n}\right)^{1/2}, $$ and $$\mathbb{E} \|\widehat{\Sigma}_n - \Sigma^*\| \leq C \left(\dfrac{\log(n)}{n}\right)^{1/2},$$ where $C$ is a constant that depends on $(\lambda^*, \mu^*, \Sigma^*)$ and does not depend on $n$. This is called point-wise convergence rates. However, when $\lambda^* = 0$, any pair of $(\mu^*, \Sigma^*)$ will give the same density, so parameter estimation is not possible. Hence, there must be a transition of convergence rates for $(\mu^*, \Sigma^*)$ as $\lambda^* \to 0$. We demonstrate this point precisely in Theorem 4.1, where the main result claims that $$\mathbb{E}||(\widehat{\mu}_n, \widehat{\Sigma}_n) - (\mu^*, \Sigma^*)|| \leq C \dfrac{1}{\lambda^*}\left(\dfrac{\log n}{n}\right)^{1/2},$$ where $C$ is a constant that does not depend on $(\lambda^*, \mu^*, \Sigma^*)$ and $n$. 
Hence, the convergence rates of $\mu^*$ and $\Sigma^*$ are slower as $\lambda^*\to 0$. This uniform convergence rate allows scientists to obtain tighter confidence intervals for parameter estimation and design better experiments in real life. To obtain those uniform bounds, the bounds between density distances and parameter distances that we develop in Section 3 need to be more refined compared to Wasserstein distances in the previous work. This leads to the divergences $\mathcal{K}$, $\mathcal{D}$, and $\mathcal{G}$, which may seem quite artificial but are technically meaningful. The novelty in the presented technique lies in the careful examination of the bound for each element $\lambda^*$, $\mu^*$, and $\Sigma^*$. In the paper, there are more challenging settings where $h_0$ can belong to the same family of distributions as $f$ and when the distinguishable condition does not hold. Each leads to different convergence rates for $\lambda^*, \mu^*$, and $\Sigma^*$. Then those uniform convergence rates are supported by simulation studies in Section 5 and Appendix F. The rates we obtain are even better than the rates obtained by using the method of moments in the literature. Please kindly refer to the general comment for more details. **Q2: The section for related works is very concise.** This is also a common concern raised by other reviewers. We decided to use the general response to explain the literature further. Please kindly refer to that. We will add the detailed literature review to the revised version. **Q3: Given the classical parametric setting and the MLE estimation, I am not very sure if this paper would be a good match for NeurIPS rather than a more statistically-focused journal.** Thanks for your comment. In our opinion, MLE is the principal estimation method for several machine learning models, including diffusion models and Transformers, so it is not outdated. 
The idea of deviated models can actually be used in the big data regime, where $h_0$ may be a pre-trained large model such as a Transformer, and $f(\cdot | \mu, \Sigma)$ is a small, low-rank model that is trained to adapt to some downstream task while freezing $h_0$ in training. The deviated proportion $\lambda^*$ can be studied to check how much the model needs to change to handle the new, smaller downstream task. This idea has recently become popular in domain adaptation [1, 2]. **Q4. I would suggest shortening Section 3 and moving the results in Appendix A to the main content. Theorem 3.3/3.5/3.6 seem to be intermediate results for proving the convergence rates with artificial distances $\mathcal{K}, \mathcal{D}$ and $\mathcal{G}$.** Thanks for your suggestion. We will consider it and edit our paper accordingly. **Q5: Assume we would like to adopt the model to fit some data in practice. How should we obtain $h_0$ which is claimed to be known in this paper?** In practice, we can see $h_0$ as arising from the domain adaptation problem, where it is estimated from a relevant data set. We then try to modify $h_0$ by a distribution in the vector-matrix family $f(\cdot | \mu, \Sigma)$ to estimate the density of the data set that we are working with [1, 2]. Also, kindly refer to the general response for another example in multiple testing problems, where we consider the distribution of the $p$-value that is obtained from numerous (independent) hypothesis tests. Hence, $h_0$ is the uniform distribution on $[0, 1]$. The distribution under $H_1$ is unknown and must be estimated using the deviated component $f^*(x)$. **References** [1] Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). [2] Jiang, Ziyu, Tianlong Chen, Xuxi Chen, Yu Cheng, Luowei Zhou, Lu Yuan, Ahmed Awadallah, and Zhangyang Wang. 
"DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment." In European Conference on Computer Vision, pp. 239-256. Cham: Springer Nature Switzerland, 2022. --- Rebuttal Comment 1.1: Comment: Authors' response addressed my concerns. I am maintaining my score (6). --- Reply to Comment 1.1.1: Comment: We thank Reviewer HL4W for your positive evaluation of our paper after the rebuttal and for maintaining your score (6).
Summary: In this paper, the authors establish the rate for estimating true parameters in the multivariate deviated model by using the MLE method. They mainly try to address two challenges encountered in deriving the rate of convergence for MLE estimators, i.e. 1) the interaction between the null hypothesis density $h_0$ and the alternative density function $f$, 2) the likelihood of the deviated proportion $\lambda$ vanishing to either extreme point of the interval [0, 1]. To this end, they develop the distinguishability condition to capture the linear independence relation between the function $h_0$ and the density function $f$, and derive the optimal convergence rate of the MLE under both distinguishable and non-distinguishable settings. Strengths: The paper is well-structured and effectively presents the problem setup, theoretical framework, and main results. The definitions and explanations of key concepts are well presented. The authors address a fundamental statistical problem and provide insights into the behavior of the MLE in the multivariate deviated model. The derived convergence rates and minimax rates contribute to the understanding of parameter estimation and hypothesis testing in complex data scenarios. Weaknesses: I feel that in the paper the comparison to existing literature is a bit limited. Particularly, how does this paper compare with the current literature on heterogeneous mixture detection? In the experiment section, it seems that the setup is a bit limited with f being Gaussian. It would be great if the authors could show some more numerical results with more expanded scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does this paper compare with the current literature on heterogeneous mixture detection in terms of assumptions and results? Can the results in Section 3.2.2 be extended to non-Gaussian distributions for f? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: How does this paper compare with the current literature on heterogeneous mixture detection in terms of assumptions and results?** Thanks for your question. Different from the heterogeneous mixture detection literature, where most of the results are under specific settings of $h_{0}$ and $f(x|\theta, \Sigma)$, in our paper we are able to provide general theories for more general settings of $h_{0}$ and $f(x|\theta, \Sigma)$. In particular, we develop the novel notion of distinguishability between $h_{0}$ and $f(x|\theta, \Sigma)$, which allows us to characterize the minimax uniform rates of parameter estimation under the general distinguishable settings of these functions. For the settings when these functions are not distinguishable, our results also cover a wide range of $h_{0}$ and $f(x|\theta, \Sigma)$ in practice, including strongly and weakly identifiable cases (e.g., multivariate Gaussian distribution and multivariate Student's t distribution with general covariance matrices). **Q2: In the experiment section, it seems that the setup is a bit limited with $f$ being Gaussian. It would be great if the authors could show some more numerical results with more expanded scenarios.** Thanks for your question. It is indeed true that the mixture model can be used with a much wider range of kernel densities. Although the Gaussian kernel is arguably the most popular, the mixture of Gamma distributions is also of interest when it comes to modeling heterogeneous data on the real line. We can consider the true generative model: $$p^*(x) = (1-\lambda^*) G(x | \alpha_0, \beta_0) + \lambda^* G(x | \alpha^*, \beta^*), $$ where the shape-rate density Gamma is defined by: $$G(x|\alpha, \beta) = \dfrac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}, \quad \alpha, \beta > 0.$$ The model is known to be strongly identifiable when $\alpha^* \neq \alpha_0$ or $|\beta^* - \beta_0| \neq 1$, so that the result in Theorem A.1 applies in this case. 
We will provide a simulation study for this model in Appendix F. **Q3: Can the results in Section 3.2.2 be extended to non-Gaussian distributions for $f$?** Thanks for your question. Most kernel densities, such as the multivariate Student's t distribution and multivariate Laplace distribution, satisfy second-order identifiability, so the results in Section 3.2.1 apply to them. For weakly identifiable families, the most popular kernel may be the location-scale Gaussians. Other examples are the Gamma distribution that we discussed previously or the skew-normal distribution (cf. [1]). For the Gamma distribution, it is interesting that the Gamma densities are only weakly identifiable on a set of zero Lebesgue measure, so this case is not the main focus of our paper, where we consider convergence rates from the testing problem's perspective, i.e., $\lambda^* \approx 0$, $\alpha^* \approx \alpha_0$, and $\beta^* \approx \beta_0$. For the skew-normal distribution, it consists of three parameters: the location, scale, and skewness (shape) parameter. When the skewness parameter is 0, the skew-normal distribution becomes the normal distribution. This distribution is useful for modeling asymmetric data. The skew-normal distribution possesses more complex algebraic structures among the location, scale, and skewness parameters (via partial differential equations of the skew-normal distribution with respect to these parameters) than those in the location-scale Gaussian distribution (cf. equations (2) and (3) in [1]). Therefore, the theoretical results in Section 3.2.2 will be richer and more complicated for the skew-normal distribution than those for the Gaussian distribution. We leave detailed theoretical analyses of both Gamma and skew-normal distributions for future work. **References** [1] N. Ho and L. Nguyen. Singularity structures and impacts on parameter estimation in finite mixtures of distributions. SIAM Journal on Mathematics of Data Science, 2019. 
--- Rebuttal Comment 1.1: Comment: Thank you very much for the response. I will maintain my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer UU3o Comment: We thank Reviewer UU3o for your positive evaluation of our paper after the rebuttal and for maintaining your score (7). Best, The Authors
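Sampling from the deviated Gamma model proposed in Q2 above can be sketched as follows (a hedged example; the parameter values are illustrative and the function name is ours). Note that NumPy's `gamma` uses a shape-scale parameterisation, so the rate $\beta$ enters as $1/\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_deviated_gamma(n, lam, a0, b0, a1, b1):
    """Draw n samples from (1 - lam) G(. | a0, b0) + lam G(. | a1, b1),
    where G(. | alpha, beta) is the shape-rate Gamma density."""
    is_dev = rng.random(n) < lam
    base = rng.gamma(a0, 1.0 / b0, n)  # numpy scale = 1 / rate
    dev = rng.gamma(a1, 1.0 / b1, n)
    return np.where(is_dev, dev, base)

# Strongly identifiable case (a1 != a0): the mixture mean is
# (1 - lam) * a0 / b0 + lam * a1 / b1 = 0.8 * 2 + 0.2 * 6 = 2.8
x = sample_deviated_gamma(100_000, lam=0.2, a0=2.0, b0=1.0, a1=6.0, b1=1.0)
```

Such draws could feed the kind of simulation study the authors say they will add in Appendix F.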
Summary: ## Summary The authors study the minimax rate for parameter recovery in deviated multivariate models. In this setting, we observe samples from a mixture (of *unknown* weight \lambda) of a "null" distribution h_0 and a distribution from a parametric family f(· | \mu, \Sigma). The goal is to recover (\lambda, \mu, \Sigma) from n samples. ## Contribution The authors study the MLE performance in various regimes (depending on how "far" h_0 is from the parametric family of distributions). Their analysis is very tight, leading to obtaining the minimax rates. Strengths: I like the result: it is mathematically clean and leads to tight results. It is always nice to see the minimax rates for new problems. Weaknesses: I am unfortunately unable to determine the position of the paper in the literature. Are the authors the first to obtain results in this setting? If yes, please explain (much) more why studying the model is interesting/important. If not, please compare thoroughly the results with the existing ones. Also, are the techniques used new in any way? Or are the results simply an application of known techniques? All in all, I think the paper needs a much more thorough literature review. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Are the authors the first to obtain results in this setting? If yes, please explain why studying the model is important. If not, please compare thoroughly the results with the existing ones.** Thanks for your questions. We provide below a more thorough comparison with other existing results and hope that it helps to clarify the position of this paper in the literature. 1. This model gained some of its early interest from testing and detection problems in biology [1], where the authors consider the semi-parametric setting: $$p^*(\cdot) = \lambda^* h_0(\cdot) + (1-\lambda^*) f^*(\cdot - \mu^*),$$ where $h_0$ is known, and $\lambda^*$, $\mu^*$, $f^*$ are to be estimated from the data. The density $f$ is further assumed to be symmetric but non-parametric. Then the identifiability, consistency and asymptotic normality of parameters are studied. 2. In [2], the authors consider the parametric location deviated model: $$p^*(\cdot) = (1-\lambda^*) N(\cdot | 0, 1) + \lambda^* N(\cdot | \mu^*, 1),$$ where the density $N$ is standard Gaussian, and the true parameters $\lambda^*$ and $\mu^*$ are of interest and are to be recovered from data. They specifically consider $l^2$ estimation and provide the uniform bound for parameter estimation (i.e., holding for all $(\lambda^*, \mu^*)$, which is similar to our result). It is worth noting that the setting in our paper is more general in the sense that the known component $N(0, 1)$ can be assumed to be any density $h_0$ and the estimated component $N(\mu^*, 1)$ can be any vector-matrix family $f(\cdot | \mu^*, \Sigma^*)$. 3. In [3, 4, 5], a similar setting is considered: $$p^*(\cdot) = (1- \epsilon_n) N(\cdot | 0, 1) + \epsilon_n N(\cdot | \mu_n, 1),$$ in which the main question is: how small can $\epsilon_n$ be, compared to the sample size $n$, in order to reliably test (or detect) $\epsilon > 0$ against $\epsilon = 0$? 
In the so-called dense setting, where they assume $\epsilon_n \asymp n^{-\beta}$ for $\beta\in (0,1/2)$, it is possible to do so when $\epsilon_n \mu_n \gtrsim (\log(n)/n)^{1/2}$, which matches the result that we have in Theorem A.1. We go beyond this setting by also considering varying covariance of the second component $N(\cdot | \mu_n, \sigma_n^2)$. Theorem A.3 implicitly says that it is possible to estimate parameters when $\epsilon_n \mu_n^2 \gtrsim (\log n/n)^{1/2}$ and $\epsilon_n \sigma_n^2 \gtrsim (\log n/n)^{1/2}$. Additional experiments in Sections F.2 and F.3 also support this finding. 4. Although this model is statistically interesting and worth studying in the classical sense, please note that its idea is still used in the Machine Learning community [9, 10]. Specifically, $h_0$ can be a pre-trained large model, and we wish to adapt it to some specific downstream task. A popular solution is to consider the deviated model with small deviated weight $\lambda^*$ and a simple model $f(\cdot | \mu^*, \Sigma^*)$, which costs less to train than $h_0$. This does not require re-training $h_0$ but can still borrow some knowledge learned from it. We hope the theoretical results that we build for our model can shed some light on the estimation problem for this low-rank adaptation technique. **Q2: Are the techniques used new in any way? Or are the results simply an application of known techniques?** The core of the techniques in our paper is from [6, 7, 8], which allows us to develop lower bounds of density distance by parameter distance using the notion of "identifiability" or "distinguishability." A novel aspect here is that we track the convergence rate of each parameter in detail instead of relying on the Wasserstein distance, which could not capture the specific rate for each parameter. Besides, the novelty that we wish to highlight is that our bounds are uniform in the parameter space $(\lambda^*, \mu^*, \Sigma^*)$. 
Because of that, we can detect whether $(\mu^*, \Sigma^*)$ is estimable when $\lambda^*\to 0$, which is a singular point of the model (i.e., when $\lambda^*=0$, every pair $(\mu^*, \Sigma^*)$ gives the same density). Both the minimax rate and the convergence rate of $(\mu^*, \Sigma^*)$ are then developed in this setting. **Q3: I think the paper needs a much more thorough literature review.** Thanks for your comment. We hope that the answer to your first question helps to clarify the current literature. Please also kindly refer to our general response for a more detailed review. We will include these discussions in the revision of the paper. [1] L. Bordes. Semiparametric estimation of a two-component mixture model where one component is known. Scandinavian Journal of Statistics, 2006. [2] S. Gadat. Parameter recovery in two-component contamination mixtures: The l2 strategy. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 56, pages 1391–1418. Institut Henri Poincaré, 2020. [3] T. Cai. Optimal detection of heterogeneous and heteroscedastic mixtures. Journal of the Royal Statistical Society: Series B, 2011. [4] T. Cai. Estimation and confidence sets for sparse normal mixtures. Annals of Statistics, 2007. [5] T. Cai. Optimal detection of sparse mixtures against a given null distribution. IEEE Transactions on Information Theory, 2014. [6] J. Chen. Optimal rate of convergence for finite mixture models. Annals of Statistics, 1995. [7] N. Ho. Convergence rates of parameter estimation for some weakly identifiable finite mixtures. Annals of Statistics, 2016. [8] P. Heinrich and J. Kahn. Strong identifiability and optimal minimax rates for finite mixture estimation. Annals of Statistics, 2018. [9] E. Hu. LoRA: Low-rank adaptation of large language models. arXiv preprint, 2021. [10] Z. Jiang. DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment. In European Conference on Computer Vision, 2022. 
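The proof technique sketched in Q2 can be summarized schematically as a chain of three bounds (constants, logarithmic factors, and the precise parameter loss $D$ are paper-specific; $\widehat{G}_n$ denotes the MLE of $G^* = (\lambda^*, \mu^*, \Sigma^*)$):

```latex
% Step 1: empirical process theory gives a density-estimation rate in Hellinger distance
h\big(p_{\widehat{G}_n}, p_{G^*}\big) = \mathcal{O}\big(\sqrt{\log n / n}\big).
% Step 2: the standard inequality between total variation and Hellinger distance
\mathrm{TV}\big(p_{\widehat{G}_n}, p_{G^*}\big) \le \sqrt{2}\, h\big(p_{\widehat{G}_n}, p_{G^*}\big).
% Step 3: an identifiability (inverse) bound converts density distance into parameter distance
D\big(\widehat{G}_n, G^*\big) \lesssim \mathrm{TV}\big(p_{\widehat{G}_n}, p_{G^*}\big)
\;\Longrightarrow\;
D\big(\widehat{G}_n, G^*\big) = \mathcal{O}\big(\sqrt{\log n / n}\big).
```

Step 3 is where the uniformity over $(\lambda^*, \mu^*, \Sigma^*)$ emphasized above must be established; Steps 1 and 2 are standard.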
--- Rebuttal Comment 1.1: Comment: I thank the authors for their response which addressed my concerns. I am maintaining my score (6), but with low confidence, subject to the authors adding all the above literature comments in their paper. --- Reply to Comment 1.1.1: Comment: We thank Reviewer bb5i for your positive evaluation of our paper after the rebuttal and for maintaining your score of weak accept (6).
Summary: The paper studies the problem of parameter recovery in the multivariate deviated model where the data is generated according to the following distribution: $$ (1 - \lambda) h_0 (x) + \lambda f(x | \mu, \Sigma) $$ where $f$ belongs to a mean-variance family and $h_0$ is known. One prominent example of such a family is the family of Gaussian distributions. The paper studies the recovery problem under three settings. The first is the distinguishable setting, where $h_0$ and the density $f$ are distinguishable (essentially, $h_0$ cannot be written as a linear combination of two distributions from the family); here statistical recovery is guaranteed at a $\sqrt{n}$ rate. In the second setting of strong identifiability, $f$ is assumed to belong to the mean-variance family, and here the convergence rates depend on the closeness of $\mu_0, \Sigma_0$, the parameters of $h_0$, to $\mu, \Sigma$, the parameters of the unknown mixture component. Finally, in the third setting the mean-variance family is the Gaussian family, and convergence behavior different from the strongly identifiable setting is observed: a second-order PDE guarantees improved performance. From a technical standpoint, the algorithm (MLE) is analyzed in the following way. First, they show that under some mild assumptions on the function class, the MLE solutions approximate the distribution in Hellinger distance. Then, by noting the relationship between the Hellinger and TV distances, the paper shows that for this class of distributions, the distance between the parameters is upper bounded by a constant multiple of the TV distance. The first step follows by standard empirical process theory. The second, however, relies on some intricate recently developed machinery. Roughly speaking, one first shows that the TV and parameter distances approximate each other locally, where the limit of the neighborhood is taken to $0$. 
Subsequently, a short analytic argument leads to a global approximation guarantee. This technique, while inspired by prior work, still takes significant care to execute. Overall, the results in the paper are interesting and the technical contributions seem strong. The fact that the statistical performance in this setting may be distinguished from algorithms that operate on mixture models where both components are unknown is also intriguing. However, Theorem 3.6 is only proved for the univariate setting (Appendix C3) while the rest of the paper focuses on the multivariate setting. Strengths: See main review Weaknesses: See main review Technical Quality: 3 good Clarity: 3 good Questions for Authors: See main review Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Theorem 3.6 is only proved for the univariate setting (Appendix C3) while the rest of the paper focuses on the multivariate setting.** Thanks for your comment. Although Theorem 3.6 is only proved for the univariate setting, the proof can be adapted to high-dimensional settings with corresponding changes of notation. Since many scenarios arise in that proof, we present the univariate setting to avoid unnecessary notational complexity that might obscure the main arguments and make the proof untidy. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will retain my current evaluation. --- Reply to Comment 1.1.1: Title: Thank you Comment: We thank Reviewer 9DGR for your positive evaluation of our paper after the rebuttal and for maintaining your score of weak accept (6) with high confidence.
Rebuttal 1: Rebuttal: **General Response** Dear AC and reviewers, We would like to express our gratitude for your constructive reviews, which helped us improve our work significantly. There are two common concerns, about the literature on deviated models and about the novelty of our paper. We therefore dedicate this general response to clarifying these concerns and will include the discussion in the revision of our paper. **I. Literature on deviated models** Let us consider the general model: $$p^{\ast}(x) = \lambda^{\ast} h_0(x) + (1-\lambda^{\ast}) f^{\ast}(x),$$ where $h_0$ is known and $(\lambda^{\ast}, f^{\ast})$ are to be estimated from data. Several attempts to provide theoretical guarantees and algorithms for this problem have been made in the literature. The model arises in multiple-testing problems (microarray analysis, neuroimaging) [1, 2]: the p-values obtained from the numerous (independent) hypothesis tests are uniformly distributed on $[0,1]$ under the null hypothesis $H_0$, so $h_0$ is the uniform distribution and $f^*$ needs to be estimated. When $f^*$ is assumed to be symmetric and non-parametric, [3] provides identifiability and consistency results for the parameter estimates. For parametric $f^*$, the related work is listed below. 1. [4] considers this model specifically when $h_0 = N(0, 1)$ and $f^* = N(\mu^*, 1)$ are normal distributions. In the setting $\lambda^* \asymp n^{-\beta}$ where $\beta \in (0, 1/2)$, they prove that no test can reliably detect $\lambda^* \neq 0$ against $\lambda^* = 0$ when $\lambda^* \mu^* = o(n^{-1/2})$, while the Likelihood Ratio Test can consistently do so when $\lambda^* \mu^* \gtrsim n^{-1/2+\epsilon}$ for any $\epsilon > 0$. However, no guarantee for the estimation of $\lambda^*$ and $\mu^*$ is provided. 2. 
The uniform convergence of estimates of $\lambda^*$ and $\mu^*$ is then revisited in [5], in the same setting, which provides the minimax rate and uniform convergence rates for both $\lambda^*$ and $\mu^*$ under the $l^2$ estimation strategy. They prove tight convergence rates for $\lambda^*$ and $\mu^*$ when $\lambda^* |\mu^*| \gtrsim n^{-1/2 + \epsilon}$ and $|\mu^*| \gtrsim n^{-1/4}$. However, their technique relies heavily on the properties of the location Gaussian family, which might be difficult to generalize to other settings of kernel densities. 3. When $f^* = N(\mu^*, \Sigma^*)$, it is possible to derive the estimation rate for $(\lambda^*, \mu^*, \Sigma^*)$ from results for general two-component mixtures in the literature. However, those bounds are often less sharp than the bounds developed in our paper, due to the lack of information about the known component. In particular, when estimating $\mu^*$ with $\Sigma^*$ fixed, an application of the moment-method results from [7] to the deviated model leads to $||\Delta \mu^{\ast}||^{3} |\lambda_{n}^{\text{moment}}-\lambda^{\ast}| = \mathcal{O}(n^{-1/2})$ and $\lambda^{\ast} ||\mu_{n}^{\text{moment}} - \mu^{\ast}||^3 = \mathcal{O}(n^{-1/2})$, where $(\lambda_{n}^{\text{moment}}, \mu_{n}^{\text{moment}})$ denote moment estimators of $\lambda^{\ast},\mu^{\ast}$; these rates are much slower than those of the MLE in the strongly identifiable and non-distinguishable settings of our work. When we estimate both $\mu^*$ and $\Sigma^*$, an adaptation of the moment estimators from the seminal work [6] to the multivariate deviated model shows $(||\Delta \mu^{\ast}||^{6} + ||\Delta \Sigma^{\ast}||^{3}) |\lambda_{n}^{\text{moment}}-\lambda^{\ast}| = \mathcal{O}(n^{-1/2})$, which is also slower than the rate of the MLE in the weakly identifiable setting. **II. The novelty of our paper** **1. 
Novel settings:** We allow the ground-truth parameters $G_*=(\lambda^{\ast},\mu^{\ast},\Sigma^{\ast})$ to change with the sample size $n$, which is closer to practical settings than the assumptions in previous work [8]. However, this induces two main obstacles: (i) $h_0$ may belong to the distribution family of $f$, which leads to some interaction between these densities; (ii) the deviated proportion $\lambda^{\ast}$ can go to zero as the sample size grows. Then, any pair $(\mu^{\ast},\Sigma^{\ast})$ induces the same model $h_0$, which makes parameter estimation more challenging. **2. Uniform Convergence Rates:** Since the true parameters may vary with the sample size, the convergence rates for parameter estimation in our work are uniform rather than point-wise, as in [8]. Additionally, these rates capture the interaction between the convergences of the different parameter estimates. **3. Minimax Lower Bounds:** Finally, we determine minimax lower bounds under both the distinguishable and non-distinguishable settings. Based on these lower bounds, we deduce that our derived convergence rates are sharp. [1] B. Efron. Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association, 96(456):1151–1160, 2001. [2] S. Robin. A semi-parametric approach for mixture models: Application to local false discovery rate estimation. Computational Statistics \& Data Analysis, 51(12):5483–5493, 2007. [3] L. Bordes. Semiparametric estimation of a two-component mixture model where one component is known. Scandinavian Journal of Statistics, 33(4):733–752, 2006. [4] T. Cai. Optimal detection of heterogeneous and heteroscedastic mixtures. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(5):629–662, 2011. [5] S. Gadat. Parameter recovery in two-component contamination mixtures: The l2 strategy. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 56, pages 1391–1418. 
Institut Henri Poincaré, 2020. [6] M. Hardt. Tight bounds for learning a mixture of two gaussians. In STOC, 2015. [7] Y. Wu. Optimal estimation of Gaussian mixtures via denoised method of moments. The Annals of Statistics, 48:1987–2007, 2020. [8] H. Nguyen. On Parameter Estimation in Deviated Gaussian Mixture of Experts.
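As a toy illustration of the parametric deviated model discussed in this response (the Gaussian special case with known component $N(0,1)$ and one unknown deviated component $N(\mu^*, 1)$), the sketch below simulates data and recovers $(\lambda^*, \mu^*)$ by maximizing the likelihood numerically. This is a minimal sketch, not the estimator analyzed in the paper; the sample size, true parameters, and optimizer choice are assumptions for illustration, and the weight $\lambda$ is placed on the deviated component (conventions vary across the cited works).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Deviated model: p(x) = (1 - lam) * N(x | 0, 1) + lam * N(x | mu, 1),
# with the first component known. Illustrative ground truth:
lam_true, mu_true, n = 0.3, 2.0, 20000
is_deviated = rng.random(n) < lam_true
x = np.where(is_deviated,
             rng.normal(mu_true, 1.0, n),   # deviated component
             rng.normal(0.0, 1.0, n))       # known component h_0

def neg_log_lik(theta):
    lam, mu = theta
    dens = (1 - lam) * norm.pdf(x) + lam * norm.pdf(x, loc=mu)
    return -np.sum(np.log(dens + 1e-300))   # guard against log(0)

# Numerical MLE over (lam, mu), constrained away from the singular point lam = 0.
res = minimize(neg_log_lik, x0=[0.5, 1.0],
               bounds=[(1e-3, 1 - 1e-3), (-10.0, 10.0)], method="L-BFGS-B")
lam_hat, mu_hat = res.x
print(lam_hat, mu_hat)  # close to (0.3, 2.0) for this seed and sample size
```

Shrinking `lam_true` toward zero in this sketch makes `mu_hat` increasingly unstable, mirroring the singularity at $\lambda^* = 0$ that the uniform rates above are designed to capture.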
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper tackles the issue of parameter estimation in the deviated Gaussian mixture of experts problem using the Maximum Likelihood Estimation (MLE) method. The authors propose new distances and analyze the convergence of MLE under distinguishable and non-distinguishable conditions. Strengths: This paper is in relatively good shape. The results seem to be solid. Weaknesses: The major weakness is the novelty. This paper basically considers a much simpler case than the paper https://huynm99.github.io/Deviated_MoE.pdf. They consider multiple $k$ while this paper considers a single $k$. The definitions, results, organization, and even the notations are almost the same. For example, the Hellinger distance and TV distance (quite strange to me, but adopted by both papers, interestingly), although this paper changes the distance from $D$. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What is the minimax rate of this problem? How close is the current rate to the minimax rate? 2. What is the difference and technical novelty compared to https://huynm99.github.io/Deviated_MoE.pdf Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The major weakness is the novelty.** Thanks for your comment. We would like to refer you to the General Response section for our elaboration on the novelty of this paper. **Q2: This paper basically considers a much simpler case than the paper [1]. In particular, the authors in that paper consider multiple $k$ while this paper considers a single $k$.** Thanks for your comment. However, we respectfully disagree with the viewpoint that the problem considered in our paper is a simpler case of that in [1], for the following reasons: Based on the formulations of the models considered in the two papers (ours and [1]), it might seem that our model is simpler, as we test whether data are sampled from a distribution with known density $h_0$ (null hypothesis) or from another single distribution (alternative hypothesis), whereas the authors in [1] test the same null hypothesis against an alternative hypothesis which says that data are generated from a mixture of experts. Nevertheless, this turns out to be untrue, as the natures of the two papers are different. In particular, in [1], the ground-truth parameters are not assumed to change with the sample size; therefore, the main goal there is to derive point-wise convergence rates for parameter estimation. By contrast, we do impose this assumption in our problem setup, as mentioned in Question 1. Thus, the objective of our paper is to characterize uniform convergence rates, which is more demanding but also more precise than the point-wise counterparts. Hence, our work is not a simpler case of [1]; rather, it lays a foundation for capturing uniform convergence rates for parameter estimation as well as minimax lower bounds in the deviated Gaussian mixture of experts. However, these directions are beyond the scope of our paper and we leave them for future work. 
**Q3: The definitions, results, organization, and even the notations of this paper and [1] are almost the same.** Thanks for your comment, but we respectfully disagree with this claim for the following reasons: 1) Regarding the definitions and notations: notions introduced in our paper such as identifiability and distinguishability, and technical tools like the Hellinger distance and Total Variation distance, are commonly used in the literature on mixture models to capture convergence rates for parameter estimation, e.g., [2, 3]. Thus, we strongly believe that the usage of these ingredients should be considered standard rather than a weakness. 2) Regarding the results: as stated in our responses to Question 1 and Question 2, the results introduced in this paper are novel and entirely different from those in [1]. 3) Regarding the organization: our paper is organized differently from [1]. In particular, while we present the Total Variation lower bounds in Section 3 and then introduce uniform and minimax rates in Section 4, [1] respectively provides the point-wise rates under distinguishable and non-distinguishable settings in those sections. Moreover, our Section 5 is devoted to experiments, whereas [1] uses this section for a proof sketch. **Q4: What is the minimax rate of this problem? How close is the current rate to the minimax rate?** Thanks for your questions. We already provided minimax lower bounds in Section 4 and Appendix A of our paper. For example, under the distinguishable settings, the minimax lower bound given in line 316 indicates that the minimax rate of $\widehat{\lambda}_n$ is of order $\mathcal{O}(n^{-1/2r})$ for $r<1$. As a consequence, the convergence rate of $\widehat{\lambda}_n$ to $\lambda^{\ast}$, which is of order $\mathcal{O}(n^{-1/2})$, is sharp. **Q5: What is the difference and technical novelty compared to [1]?** Thanks for your question. 
Firstly, we would like to refer the reviewer to Question 1 for the difference between our paper and [1]. Secondly, we elaborate on the technical novelty of our paper as follows: **Uniform Convergence Rates:** under the distinguishable settings, although our work and [1] both point out that the estimation rate for $\lambda^{\ast}$ is of order $\mathcal{O}(n^{-1/2})$, we arrive at rates for estimating $(\mu^{\ast},\Sigma^{\ast})$ different from those in [1] because of the sample-size dependence assumption. Specifically, while the authors of [1] show that this rate is of order $\mathcal{O}(n^{-1/2})$, we demonstrate that it should rather be slower, since it is actually determined by the rate at which $\lambda^{\ast}$ converges to zero via the following bound: $\lambda^{\ast}||(\widehat{\mu}_{n}-\mu^{\ast},$ $\widehat{\Sigma}_{n}-\Sigma^{\ast})||$ $=\mathcal{O}(n^{-1/2})$. These rates are sophisticated and able to highlight the implicit interactions between the convergence rates of different parameter estimates, which is missing in [1]. To achieve the above rates, we have to face many challenging scenarios in our proofs. For instance, we first need to make sure that the two sequences $G_{n}$ and $G_{\ast,n}$ converge to the same limit $\overline{G}$ under the proposed loss functions. Furthermore, there is still a possibility that the last two components of $G_{n}$ or $G_{\ast,n}$ may not converge to those of $\overline{G}$ under the $2$-norm. Thus, it takes considerably more effort to consider all these possible scenarios than in [1], where the authors only need to control the convergence of $G_n$ to $G_{\ast}$. **References** [1] H. Nguyen, K. Nguyen, N. Ho. On Parameter Estimation in Deviated Gaussian Mixture of Experts. [2] S. Gadat, J. Kahn, C. Marteau, and C. Maugis-Rabusseau. Parameter recovery in two-component contamination mixtures: The l2 strategy. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 56, pages 1391–1418. 
Institut Henri Poincaré, 2020. [3] D. Do, L. Do, and X. Nguyen. Strong identifiability and parameter learning in regression with heterogeneous response. arXiv preprint arXiv:2212.04091, 2022. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 1EFM, We would like to thank you very much for your feedback, and we hope that our response addresses your previous concerns about our paper. However, as the discussion period is expected to conclude in the next few days, please feel free to let us know if you have any further comments on our work. We would be more than happy to address any additional concerns from you. Thank you again for spending time on the paper, we really appreciate that! Best regards, The Authors
null
null
null
null
null
null
Balanced Training for Sparse GANs
Accept (poster)
Summary: This work proposes a metric named balance ratio to represent the balance between the generator and discriminator in dynamic sparse training, and furthermore proposes balanced dynamic sparse training to balance the performance and computation cost. Strengths: (++) There are not many previous works that tried to apply DST to GANs, so this research is very valuable, in my opinion. The insight of the problem of balance between the generator and discriminator is also accurate. (++) The motivation is clearly explained and discussed well. Weaknesses: (----) The experiments are the main problem. The baselines (SNGAN, BigGAN) seem to be out of date. The involved datasets are all small, but larger images (e.g., FFHQ) are not utilized. Recently, GAN frameworks can even generate images larger than 1024x1024. Generating small images is not so challenging today, as powerful frameworks, e.g., diffusion models, have achieved great success. In my opinion, at least 256x256 images should be considered to show the value of the proposed method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Could you please give a reasonable explanation for only involving small datasets? If there is a reason I did not notice, I will correspondingly update my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 4 excellent Limitations: I did not find the discussion of limitations. I also did not notice the potential limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of the value and motivation behind our work. We would like to address the reviewer's concerns as follows: > **Q1. The experiments are the main problem. The baselines (SNGAN, BigGAN) seem to be out of date.** Thank you for your valuable input. However, we respectfully disagree with the notion that BigGAN is outdated compared to SOTA, e.g. StyleGAN-XL. Although StyleGAN is currently the SOTA model for generating high-resolution images, it has some limitations that make it less versatile than BigGAN. For instance, research [8] has shown that the StyleGAN family struggles with generating images that have high inter-class variations, such as those in ImageNet. Therefore, we believe it is reasonable to conduct our experiments on BigGAN, which is still a highly effective and versatile model for image generation tasks. However, we appreciate your suggestion and will keep the latest SOTA models in mind for future work. > **Q2. In my opinion, at least 256x256 images should be considered to show the value of the proposed method.** We want to explain that **(1) we mainly design our experiments following previous baselines, and (2) our contribution is not solely ADAPT's performance.** 1. It is worth noting that the field of GAN dynamic sparse training (DST) is relatively new, and the pioneering work, STU-GAN, mainly draws its conclusions based on the CIFAR-10 dataset. In our experiments, we have observed excellent performance of our proposed methods on CIFAR-10, STL-10, and TinyImageNet datasets. Additionally, running experiments on larger datasets and higher resolutions can be computationally demanding, requiring multiple GPUs and several days to complete a single case. Given these resource constraints, conducting extensive experiments involving multiple density ratios, SDST variants, and two settings on larger datasets may currently be computationally heavy. 2. 
The primary goal of our paper extends beyond achieving state-of-the-art (SOTA) performance. To provide a concise summary, we accomplish the following: (1) Propose a novel metric to study the balance in sparse GAN training. (2) Introduce and analyze the behaviors of various strategies for STU-GAN, offering valuable insights into its effectiveness and limitations. (3) Identify and propose solutions for certain limitations of STU-GAN, thereby contributing to the advancement of dynamic sparse training in the GAN domain. We firmly believe that these findings not only enhance the performance of STU-GAN but also have the potential to pave the way for further research and advancements in this field. Nonetheless, we highly value the reviewer's suggestion, and we are actively working on testing our methods on larger datasets. As soon as these experiments are completed, we will incorporate the results into the next version of our work, providing a comprehensive evaluation of our proposed methods. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in their rebuttal. The additional experiment results have addressed most of my concerns. I still uphold that BigGAN is outdated, at least for CIFAR-10, because the mentioned StyleGAN-XL has achieved an FID of 1.85 on CIFAR-10, which will be very difficult for BigGAN to surpass. As the research interest of this work is not lightweight design, there is also no particular reason to prioritize saving computing resources. However, I agree that the goal of the paper is not to achieve SOTA performance. For most SOTA frameworks, spectral normalization (SN) is employed to stabilize training. The authors have conducted experiments on SNGAN, demonstrating that the method can work well with the SN technique. Note that SN is also a technique for suppressing mode collapse. As a result, the proposed method is very likely to be effective for SOTA frameworks. 
It would be better if the authors could provide a simple experiment applying the proposed method to a SOTA framework on one large dataset, but there is no need to compare it to other SOTA models. Meanwhile, DST in GANs is not a well-studied topic. GANs also need new ideas and improvements to gain more advantages in competition with other frameworks. This paper provides good motivation, technical analysis, and a method. Thus, although one experiment is lacking, I think the advantages of this work outweigh the disadvantages, so I have raised my score. --- Reply to Comment 1.1.1: Title: Thank you for your comment Comment: We are grateful for the time and effort the reviewer spent reviewing our work and providing valuable comments, and we thank you for considering raising your rating. Your suggestions for the experiments are greatly appreciated, and we will certainly incorporate the results into the manuscript once they are available.
Summary: This paper presents a method for dynamic sparse training for GANs. In particular, the authors propose the balance ratio to study the balance status between the generator and discriminator. In addition, a balanced dynamic sparse training strategy is designed by applying BR to achieve a good trade-off between performance and computational cost. Experimental results proved the effectiveness of the proposed method. Strengths: (1) The motivation of balancing sparse GAN training under resource budget is well presented. (2) The explanations and illustrations of the balance ratio and balanced dynamic sparse training are well-formulated and mostly clear and intuitive. Weaknesses: (1) The motivation of applying DST to GAN is not that novel to the community and the authors listed STU-GAN as an example. In addition, the experimental improvements over STU-GAN is trivial according to Table 1-3. (2) The normalized training FLOPs in Table 3 fluctuate a lot and are not linearly consistent with the generator density. It contradicts the basic belief of balanced model size between the generator and discriminator for stable GAN training. (3) The DST strategy is in-time over-parameterization, what is the difference to a dynamic-ratio Dropout? (4) When the DST strategy is applied, the balance ratio cannot explain away other important factors for the GAN performance. For example, when spectral normalization is used in the discriminator, the GAN training can be very stable among a lot of practice even for unbalanced model size between the generator and discriminator. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing that our work is well-motivated, well-formulated, and clear. We hereby address the reviewer's questions. > **Q1. The motivation of applying DST to GAN is not that novel.** **We want to point out that though STU-GAN [6] is the first to apply DST to GANs, it has limitations and requires a better understanding.** Our work is novel as it provides a way to analyze DST for GANs, identifies STU-GAN's limitations, and proposes improvements to address these shortcomings. More specifically, in our Section 5, we study STU-GAN through BR and highlight several limitations of it: 1. STU-GAN requires a pre-defined density for the discriminator, lacking a principled method for choosing this value, while the chosen density can significantly impact the final performance of STU-GAN (see Figure 1). 2. STU-GAN may fail when the discriminator is initialized to be weak. In essence, STU-GAN only mitigates the unbalance problem when the discriminator is stronger than the generator. This limitation originates from the facts that (1) STU-GAN does not have a principled way to measure the unbalance, and (2) STU-GAN only adjusts the generator. Our work identifies these shortcomings through BR and proposes to address the limitations of STU-GAN. > **Q2. In addition, the experimental improvements over STU-GAN are trivial according to Table 1-3.** We appreciate the reviewer's input, but we respectfully disagree with the reviewer's opinion, as (1) the improvement is not trivial, and (2) our goal is not solely to become the SOTA method. 1. **Our improvement is not trivial.** Our improvement is most prominent in very sparse cases. For example, consider the second-best method SDST-Strong-RigL, which is well-performing and stable: (1) Table 1, 10\% gen density, SNGAN on CIFAR-10. Our method outperforms it by 2.83 FID with 63\% FLOPs. (2) Table 1, 10\% gen density, SNGAN on STL-10. Our method outperforms it by 17.67 FID with 53\% FLOPs. 
(3) Table 1, 10\% gen density, BigGAN on Tiny-ImageNet. Our method outperforms it by 1.72 FID with 59\% FLOPs. Similar favorable outcomes are observed in Table 2. Therefore, our method improves the performance of STU-GAN while significantly reducing computational costs. 2. **The primary goal of our paper extends beyond the SOTA method.** To summarize, (1) we propose a metric to study the balance in sparse GAN training, (2) we introduce different strategies for STU-GAN and study the behaviors of them, (3) we identify and propose solutions for certain limitations of STU-GAN. We believe these findings may pave the way for further research in this domain. > **Q3. The normalized training FLOPs in Table 3 fluctuate a lot and are not linearly consistent with the generator density. It contradicts the basic belief of balanced model size between the generator and discriminator for stable GAN training.** We appreciate the reviewer for bringing up this question, and we would like to provide some insights into why the normalized training FLOPs (sum of the generator and discriminator) do not exhibit "linear consistent (scaling)" with the generator density: 1. **The discriminator density is not fixed.** In our experiments, we observed that the discriminator's density may first increase and then decrease until it stabilizes during the training process. This behavior can be observed in Figure 5 and Figure 6, which helps explain why the total training FLOPs do not linearly scale with the generator density. 2. **The balancing density of the discriminator may not linearly scale with the generator density.** It is important to note that the generator's capacity does not necessarily linearly scale with its density. For example, having a generator with twice the number of parameters does **NOT** imply a proportional increase in representation power or capacity. The same holds true for the discriminator. 
As a result, the non-linear relationship between generator density and discriminator density further contributes to the non-linear scaling of total training FLOPs. 3. **Moreover, due to DST/sparse initialization, the allocation of parameters across different layers may vary for different G/D densities and even different epochs.** Hence, we do not expect the total training FLOPs to exhibit linear scaling with the generator density. > **Q4. The DST strategy is in-time over-parameterization, what is the difference to a dynamic-ratio Dropout?** We appreciate the reviewer's question. The key distinction between ITOP (or DST in general) and dynamic-ratio Dropout lies in the **resulting network sparsity**. During training, pruned weights in DST are set to zero and do not receive updates. In contrast, while dropout sets weights' gradients to zero, the weights themselves remain nonzero and may still receive updates when momentum is used. Consequently, DST results in a sparse network with many zero weights after training, while dynamic-ratio Dropout produces a (normal) dense network. We hope this clarifies the distinction between ITOP and dynamic-ratio Dropout. > **Q5. When the DST is applied, BR cannot explain away other important factors for the GAN performance. For example, when spectral normalization (SN) is used in the discriminator, the GAN training can be very stable among a lot of practice even for unbalanced model size between the generator and discriminator.** We thank the reviewer for the question. However, we want to emphasize that **(1) SN is indeed used in our models, and (2) SN alone is not enough to enable balanced training for sparse GANs.** We want to kindly point out that in our experiments (Section 5), we indeed applied SN to the discriminator. The results show: 1. SN alone is not sufficient to stabilize training under an unbalanced model size between the two components. Despite using SN, sparse GAN training may still exhibit instability due to the unbalanced components. 2. 
The BR is able to quantify the imbalance between the two components when SN is applied. --- Rebuttal Comment 1.1: Comment: Dear Reviewer PnSi, Following our recent rebuttal submission, we wanted to ensure you've had an opportunity to review our responses. As the reviewer-author discussion period is drawing to a close, we'd value any feedback you might have. Thank you for your attention and understanding. Best regards, Authors
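As a side note to Q4 above, the DST-vs-dropout distinction can be illustrated numerically. The following is a minimal sketch under SGD with momentum; the four-weight setup, values, and variable names are illustrative assumptions, not the paper's implementation:

```python
# A minimal sketch (not the paper's code) contrasting DST-style pruning with
# dropout under SGD with momentum; all names and values are illustrative.
w = [0.3, -0.2, 0.5, 0.15]     # weights
v = [0.0] * 4                  # momentum buffer
mask = [1, 0, 1, 0]            # 0 = pruned (DST) / dropped (dropout) weight
beta, lr = 0.9, 0.1

# One warm-up step with all weights active, filling the momentum buffer.
g = [1.0] * 4
v = [beta * vi + gi for vi, gi in zip(v, g)]
w = [wi - lr * vi for wi, vi in zip(w, v)]

# Dropout-style step: the gradient is zeroed for dropped weights, but the
# stale momentum still moves their (nonzero) magnitudes.
g_drop = [gi * mi for gi, mi in zip(g, mask)]
v = [beta * vi + gi for vi, gi in zip(v, g_drop)]
w_dropout = [wi - lr * vi for wi, vi in zip(w, v)]

# DST-style step: pruned weights are hard-zeroed and receive no updates.
w_dst = [wi * mi for wi, mi in zip(w, mask)]

assert all(wi == 0.0 for wi, mi in zip(w_dst, mask) if mi == 0)
assert all(wi != 0.0 for wi, mi in zip(w_dropout, mask) if mi == 0)
```

After training, the DST weights are exact zeros (a genuinely sparse network), while the dropout weights stay dense because momentum keeps updating the dropped coordinates.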
Summary: The paper addresses the challenge of reducing the computational complexity of training GANs by leveraging dynamic sparse training (DST) techniques. The authors propose a novel metric called the balance ratio (BR) to quantify the balance between the sparse generator and discriminator during GAN training. They also introduce a method called balanced dynamic sparse training (ADAPT) to control the BR and achieve a balance between performance and computational cost. The paper begins by providing a thorough background on GANs, sparse training, and the challenges associated with applying DST to GANs. The motivation for the research is clearly explained, emphasizing the need for efficient training methods without sacrificing performance. The proposed metric, BR, is introduced and its significance in measuring the balance between the generator and discriminator is well-established. The methodology section is comprehensive, detailing the steps involved in ADAPT. The authors describe the specific modifications to the GAN training process, incorporating sparse training techniques and controlling the BR. The mathematical formulations and algorithms are clearly presented, making it easier to understand the implementation details. The paper is generally well-written and structured, with a logical flow of ideas. However, there are a few areas where the clarity could be improved. In some sections, the technical details and explanations are a bit dense, making it challenging for readers unfamiliar with the topic to follow along. Providing more intuitive explanations or examples could enhance the accessibility of the paper. Strengths: 1. The paper is well-motivated. 2. Using the balance ratio as a metric to understand generator and discriminator sparsity and its effects is interesting and seems very useful. 3. The performance improvement over STATIC demonstrates the effectiveness of DDST. Weaknesses: The technical details and explanations are a bit dense. 
Providing more intuitive explanations or examples could enhance the accessibility of the paper. This is not required, but it could be part of the Appendix. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful feedback, which recognizes that our work is well-motivated, interesting, and effective. Your positive evaluation is both encouraging and valuable. Following the reviewer's valuable suggestion, we will provide additional details and establish stronger connections between different sections of the paper in subsequent versions, thereby improving the overall clarity and coherence of our work. If there are any further questions or concerns, please raise them and we will be more than willing to answer them. --- Rebuttal Comment 1.1: Comment: I have read the rebuttals and I am satisfied.
Summary: Motivated by the identified imbalance between the generator and discriminator during sparse GAN training, this work proposes a quantitative metric dubbed balance ratio as an indicator for the degree of balance in sparse GAN training. Leveraging this metric, this work further proposes the ADAPT framework to dynamically adjust the sparsity of the discriminator towards balanced GAN training. Experiments across various datasets validate the effectiveness of the proposed method in achieving a good trade-off between performance and computational cost. Strengths: 1. This work is well-written with a clear logical flow. The well-organized structure from observations to understandings and solutions is appreciated. 2. The proposed balance ratio can well indicate the balance between the generator and discriminator during GAN training, which can potentially serve as a useful metric for the community. Weaknesses: 1. The imbalance between the generator and discriminator has been extensively studied and it is not clear why the proposed quantitative metric (i.e., the balance ratio) can outperform previous indicators or solutions under the scenario of sparse GAN training. The authors are expected to provide a literature review and discuss the key advantage of the proposed method that makes it particularly suitable for sparse GAN training. 2. It is not clear what is the rationale for proposing the two variants ADAPT_relax and ADAPT_strict. What is the expectation of the performance ranking between the two according to the claim "a more interesting observation is that ADAPT_strict sometimes outperforms ADAPT_relax"? 3. For the experimental results, only limited/insufficient baselines are considered. The authors are expected to benchmark the proposed framework with other sparse GAN training methods, e.g., [1][2] cited below. [1] "Data-efficient gan training beyond (just) augmentations: A lottery ticket perspective", T. Chen et al., NeurIPS'21. 
[2] "Don't be so dense: sparse-to-sparse gan training without sacrificing performance", T. Chen et al., IJCV'23. 4. Since only unstructured sparsity is considered, the reported FLOPs reduction cannot be turned into real-device speedup. The authors are expected to perform experiments under structured sparsity to validate whether the proposed method is still effective. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have listed my questions in the weakness section. I am willing to adjust my scores if my concerns are properly addressed. Minor: Line 256 "while the density of the generator is dynamically adjusted with DDA" => Here "generator" should be the "discriminator"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work targets improving the training efficiency of GANs, thus not suffering from obvious negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that our work is well-written and the proposed metric is useful. We hereby address the reviewer's concerns: > **Q1. It is not clear why the proposed metric can outperform previous indicators or solutions. The authors are expected to provide a literature review and discuss the key advantage of the proposed method that makes it particularly suitable for sparse GAN training.** We express our gratitude to the reviewer for providing constructive suggestions. In fact, in our paper, we have included several related papers [1,2,5,6,7]. We hereby provide a more detailed literature review in the **global rebuttal** and we intend to include it in our forthcoming versions as well. > **Q2. It is not clear what is the rationale for proposing the two variants of ADAPT.** We deeply appreciate the reviewer's insightful question. If we understand correctly, the reviewer raises the question regarding the "rationale" of the more difficult setting and its corresponding variant $ADAPT_{strict}$ (please correct us if we are wrong). We want to clarify that we do so **(1) to provide a more comprehensive study by evaluating our proposed approach in a different setting, and (2) to further push the limit of DST to achieve even greater reductions in computational costs.** As the reviewer mentions, normally, we would only consider the setting where we can choose discriminators with arbitrary densities (as considered in STU-GAN [6]), i.e., the relaxed setting. Beyond that, we further introduce the strict setting, which is much more difficult but ensures greater computational savings, as explained below. The fundamental distinction between the two settings lies in the density constraint imposed on the discriminator. More specifically, in the relaxed setting, the discriminator can have densities in the range $d_D \in (0\\%,100\\%]$, offering a wide range of density choices. 
In contrast, the strict setting with $d_D^{max}=50\\%$ limits the discriminator to choose densities in the range $d_D \in (0\\%,50\\%]$. To illustrate, let's consider a scenario where we require a stronger discriminator when the density $d_D$ is already at $50\\%$. In the relaxed setting, $ADAPT_{relax}$ has the freedom to increase the discriminator's density from $50\\%$ to $100\\%$ (resulting in a dense discriminator). However, such an option is unavailable for $ADAPT_{strict}$ since we are already utilizing the highest density permitted. Consequently, the relaxed setting ensures a minimum computational savings of approximately $50\\% * C_{G}$, where $C_{G}$ and $C_{D}$ denote the computational costs of the dense generator and discriminator, whereas the strict setting guarantees at least around $50\\% * (C_{G}+C_{D})$ computational savings. In essence, the strict setting only permits the use of discriminators with low densities, but it ultimately leads to more substantial computational savings. > **Q3. What is the expectation of the results in the two settings?** Given the constraints it imposes, we expect the performance of $ADAPT_{strict}$ to be inferior compared to $ADAPT_{relax}$, as more restrictions are introduced. > **Q4. For the experimental results, only limited/insufficient baselines are considered. The authors are expected to benchmark the proposed framework with other sparse GAN training methods, e.g.,[6,7] cited below.** We express our gratitude to the reviewer for providing constructive feedback. In response, we would like to elaborate on our answers as follows: 1. We want to kindly point out that, as mentioned in our section 4 (line 137), **STU-GAN [6] is indeed included as one of our baselines**, as it is essentially the same as SDST-RigL in our work. 2. The innovative work [7] primarily focuses on lottery ticket finding, which involves a costly train-prune-retrain process, aiming to improve data efficiency. However, our work centers on achieving efficient training. 
Therefore, we believe that [7] is not a necessary baseline for our specific research objectives. 3. As indicated in [6], STU-GAN has been shown to outperform post-hoc pruning. We have also conducted validation of this finding in Appendix E.3, Table 6, further supporting our selection of STU-GAN as a strong baseline in our study. We hope these elaborations provide better clarity on the choices we made for the baselines. > **Q5. The authors are expected to perform experiments under structured sparsity to validate whether the proposed method is still effective.** While we acknowledge the importance of structured pruning, it is crucial to take into account the current state of the GAN DST field, and the pruning field as a whole. 1. It is important to highlight that a significant number of works (almost all pruning works mentioned in Section 3) in the field continue to focus on unstructured pruning, including follow-up works of LTH, DST, foresight pruning, and others. These works have contributed invaluable insights to the pruning research community. 2. In fact, to the best of our knowledge, the only GAN DST work, i.e., STU-GAN, focuses on unstructured pruning. Our primary objective is to build upon and enhance the potential of STU-GAN by addressing its limitations and extending its capabilities. As a result, while we recognize the significance of structured pruning, it is not the foremost goal of our current work. However, we once again acknowledge that structured pruning plays a crucial role in enhancing model efficiency, and we intend to explore structured pruning extensively in our future research. > **Q6. "while the density of the generator is dynamically adjusted with DDA" => Here "generator" should be the "discriminator"** Thank you for pointing out the typo. We will fix it in the next version. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thank the authors for their efforts in providing the rebuttal. Most of my concerns are properly addressed. 
I tend to accept this paper given its current shape and will further adjust my scores based on the discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback! If you have any additional questions or require further information, please feel free to raise them, and we will be more than happy to address them. --- Reply to Comment 1.1.2: Comment: Dear Reviewer cAMN, We truly appreciate your constructive feedback and the time you've taken to consider our rebuttal. We understand that you may wish to discuss with fellow reviewers. As the reviewer-author discussion period nears its end, we wish to remind you to possibly adjust our submission's score after your discussions, if you still intend to do so. Your thoughtful consideration in this matter is deeply valued. Once again, thank you for your dedication and effort throughout this review process. Best regards, Authors
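Returning to Q2 earlier in this thread, the guaranteed-savings bookkeeping can be sketched as a back-of-envelope check. The unit costs $C_G = C_D = 1$ and the 50% generator density are illustrative assumptions, not measured numbers:

```python
# Hypothetical back-of-envelope check of the minimum guaranteed savings in
# the relaxed vs. strict settings (unit costs and 50% densities assumed).
C_G, C_D = 1.0, 1.0   # dense per-step costs of generator and discriminator

# Relaxed setting: the discriminator may grow fully dense, so only the
# generator's sparsity (assumed 50% density here) is guaranteed saved.
relaxed_min_savings = 0.5 * C_G
# Strict setting with d_D^max = 50%: both networks stay at most 50% dense.
strict_min_savings = 0.5 * (C_G + C_D)

assert relaxed_min_savings == 0.5
assert strict_min_savings == 1.0
```

The strict setting thus guarantees roughly twice the minimum savings under these assumptions, at the price of restricting the discriminator's capacity.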
Rebuttal 1: Rebuttal: We thank the reviewers for recognizing that our work is well-written (cAMN, PnSi), useful (cAMN, 6W4E), effective (6W4E), well-motivated (6W4E, PnSi, sFon), and valuable (sFon). In response to the reviewers' requests, we have included the following additional content. > **Literature review requested by reviewer cAMN** Addressing the balance between the generator and discriminator in GAN training has been the focus of various works. However, directly applying existing methods to sparse GAN training poses challenges. For instance, [1,2] offer theoretical analyses on the issue of imbalance but may have limited practical benefits, e.g., they require training multiple generators and discriminators. Empirically, BEGAN [3] proposes to use proportional control theory to maintain a hyper-parameter $\frac{\mathbb{E}[|G(z)-D(G(z))|^\eta]}{\mathbb{E}[|x-D(x)|^\eta]}$, but it is only applicable when the discriminator is an auto-encoder. Unbalanced GAN [4] pretrains a VAE to initialize the generator, which may only address the imbalance near initialization. GCC [5] considers the balance during GAN compression, but its criterion requires a trained (dense) GAN, which is not given in the DST setting. Finally, STU-GAN [6] proposes to use DST to address the imbalance issue but may fail under certain conditions, as demonstrated in our experiments. In summary, the existing approaches cannot be directly applied to balanced GAN DST. The only metric that could potentially be helpful for sparse GAN training is the one presented by BEGAN [3], which has restrictions on the discriminator architecture. Unlike BEGAN, our metric isn't constrained to a specific discriminator architecture. Furthermore, it is simple to compute and effective across the broad range of experiments presented in our paper. [1] Arora, Sanjeev, et al. "Generalization and equilibrium in generative adversarial nets (GANs)." ICML, 2017. [2] Bai, Yu, Tengyu Ma, and Andrej Risteski. 
"Approximability of discriminators implies diversity in GANs." ICLR, 2018. [3] Berthelot, David, Thomas Schumm, and Luke Metz. "BEGAN: Boundary equilibrium generative adversarial networks." arXiv preprint arXiv:1703.10717 (2017). [4] Ham, Hyungrok, Tae Joon Jun, and Daeyoung Kim. "Unbalanced GANs: Pre-training the generator of generative adversarial network using variational autoencoder." ICML (2020). [5] Li, Shaojie, et al. "Revisiting discriminator in GAN compression: A generator-discriminator cooperative compression scheme." NeurIPS (2021). [6] Liu, Shiwei, et al. "Don't be so dense: Sparse-to-sparse GAN training without sacrificing performance." IJCV (2023). [7] Chen, Tianlong, et al. "Data-efficient GAN training beyond (just) augmentations: A lottery ticket perspective." NeurIPS (2021). [8] Kang, Minguk, Joonghyuk Shin, and Jaesik Park. "StudioGAN: A taxonomy and benchmark of GANs for image synthesis." arXiv preprint arXiv:2206.09479 (2022).
NeurIPS_2023_submissions_huggingface
2023
Optimal Learners for Realizable Regression: PAC Learning and Online Learning
Accept (oral)
Summary: This paper studies the statistical complexity of realizable regression in the PAC learning and online learning setups. The main results are the following combinatorial conditions that characterize PAC and online learnability: - PAC learnability by a (worst-case) ERM learner is equivalent to having a finite $\gamma$-graph dimension for all $\gamma \in (0, 1)$. - PAC learnability is equivalent to the finiteness of the $\gamma$-one-inclusion graph dimension for all $\gamma \in (0, 1)$. - The minimax cumulative loss in online learning is characterized (up to a constant factor) by the online dimension. The combinatorial dimensions above are newly introduced in the paper. The authors also conjecture that the DS dimension from the literature characterizes PAC learnability as well. In addition, the paper provides several other examples that shed light on the landscape between learnability, uniform convergence, and other complexity measures of the hypothesis class (Figure 1). Strengths: This paper studies a fundamental problem in learning theory, which has, surprisingly, been left open for several decades. The results are strong and comprehensive, and the authors did a great job in introducing the prior results and presenting the high-level roadmaps behind the technical proofs. Weaknesses: My only complaint is on the short conclusion and a lack of discussion on future directions (apart from the obvious one of proving Conjecture 1). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Regarding Conjecture 1: - Could you elaborate on the obstacle that prevents the approach of [BCD+22] from being applied to the regression setting? - Is there any evidence or heuristic argument that supports the conjecture? Are there interesting assumptions under which a finite $\gamma$-DS dimension implies learnability? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: This is a theory paper and its limitations lie in the assumptions on which the validity of the results rely, including the realizability assumption and the focus on PAC and online learning. This has been formally stated in the paper, and also explicitly mentioned in the title and abstract. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the positive feedback on the significance and presentation of our results and the interesting questions and suggestions. > *My only complaint is on the short conclusion and a lack of discussion on future directions (apart from the obvious one of proving Conjecture 1).* In the next version of our draft, we will include a more detailed conclusion section where we will summarize the main contributions of our work and the important next steps. Another future direction, not directly related to Conjecture 1, is to better understand the gap between the fat-shattering dimension and the OIG-based dimension. In particular, it would be interesting to come up with examples of "natural" hypothesis classes, other than the one we provide in Example 1, which witness the fact that the fat-shattering dimension does not characterize learnability. >*Could you elaborate on the obstacle that prevents the approach of [BCD+22] to be applied towards the regression setting?* In [Brukhim et al., 2022], the authors start by showing that if the DS dimension of $H$ is bounded by $d$, then using the OIG algorithm they can derive a learner that has error at most $d/(d+1)$. Notice that this learner is very weak, but has non-trivial guarantees. Subsequently, they use non-trivial arguments that go through list-PAC learning and sample-compression schemes to boost this very weak learner. This step is crucial for the multiclass setting since it non-trivially reduces the infinite label space. However, in the realizable regression setting, it is trivial to derive a learner that has error at most $1/2$. Indeed, if we focus on the $\ell_1$ loss and $Y = [0,1]$, by always predicting $1/2$ we can design such a learner. To the best of our knowledge, using the definition of the $\gamma$-DS dimension in a similar way as in [Brukhim et al., 2022] does not result in a non-trivial learner in the regression setting. 
This is the main and crucial difference between classification and regression. >*Are there any evidence/heuristic arguments that support the conjecture? Are there interesting assumptions under which finite gamma-DS dimension implies learnability?* Let us first elaborate on the connection between OIG and the DS dimension. We will focus on the multiclass setting, studied in [Brukhim et al., 2022]. Our Conjecture 1 essentially claims that this connection extends to the realizable regression task. Interestingly, there is some notion of “duality” between one-inclusion graph algorithms and pseudo-cubes (the combinatorial objects that define the DS dimension). In particular, Lemmas 12 and 13 in [Brukhim et al., 2022] show that there is a certain duality between orientations of the OIG and the DS dimension. Concretely, let us consider a class $H$ with DS dimension $d$. Intuitively, if the algorithm is given only $d$ labeled examples, then any orientation of the OIG will have large out-degree (which means that it will be a bad learner). Moreover, if the algorithm is given $d+1$ labeled examples, then there exists a good orientation (one with small out-degree), which implies the existence of a good learner. These intuitive statements shed light on the connection between the DS dimension and learnability through the OIG structure. Deriving sufficient conditions is actually an interesting question for future work, and probably easier than proving the conjecture to its full extent. The reason we believe it is true is that, similar to the multiclass classification problem, the $\gamma$-DS dimension feels more "natural" than the dimensions that have been proposed in the past, and seems to be capturing the learnability problem in a tighter way. Moreover, its definition is closely related to the out-degree of the OIG (as we discussed in the previous paragraphs), which, as we have shown, is the quantity that controls learnability in this setting. 
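To make the trivial-learner point from the previous answer concrete: for the $\ell_1$ loss with $Y = [0,1]$, the constant predictor $h(x) \equiv 1/2$ satisfies (our paraphrase of the argument above)

```latex
% Worst-case l_1 error of the constant predictor h(x) = 1/2 on Y = [0,1]:
\sup_{y \in [0,1]} \left| y - \tfrac{1}{2} \right| \;=\; \tfrac{1}{2},
```

so a learner with error at most $1/2$ exists without looking at any data, and the boosting route of [Brukhim et al., 2022], which starts from a weak learner with error below $1$, yields nothing non-trivial here.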
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed answers to my questions. I don't have further questions and my positive evaluation of the paper is unchanged.
Summary: This paper develops optimal learners and characterizes learnability with new combinatorial dimensions for realizable regression (where the best predictor has zero regret) in PAC and online learning, significantly clarifying the landscape of learnability in PAC/online learning. For PAC learning, they show that: - realizable regression is PAC learnable by a worst-case ERM iff the $\gamma$-graph dimension is finite - learnability of realizable regression is fully characterized by a finite $\gamma$-one-inclusion graph dimension - a finite $\gamma$-DS dimension is a necessary condition for PAC learnability of realizable regression For online learning, they devise a new combinatorial dimension, namely the online dimension, which is built upon the scaled Littlestone dimension. They show that the online dimension characterizes the minimax instance-optimal cumulative loss up to a constant factor and design an optimal online learner. Strengths: - Significant results that complete the landscape of learnability of PAC/online learning in realizable regression Weaknesses: - None that I know (note that this problem area is not my research domain) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper might need to discuss the limitations of the results and analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the positive feedback on the significance of our results. > *The paper might need to discuss the limitations of the results and analysis.* We believe that the main limitation of our work is that the OIG-based dimension we propose is more complicated than the dimensions that have been proposed in the past, like the fat-shattering dimension (which, as we explain, does not characterize learnability in the realizable regression setting). Nevertheless, despite its complexity, this is the first dimension that characterizes learnability in the realizable regression setting. Moreover, our work leaves as an important next step the task of proving (or disproving) our conjecture that the (combinatorial and simpler) $\gamma$-DS dimension is qualitatively equivalent to the $\gamma$-OIG dimension. We will add a discussion on the limitations in the first revision of our manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. After enriching myself further with the relevant literature, I think the contributions in this paper are solid on fundamental levels and add important progress in the learning theory community. I thus increased my score from 7 to 9, and my confidence from 2 to 4. --- Reply to Comment 1.1.1: Comment: We are grateful to the reviewer for taking the time to familiarize themselves further with the literature and for appreciating our contributions.
Summary: This paper introduces some dimensions that characterize PAC learnability for realizable regression. The authors introduce the $\gamma$-graph dimension, which is necessary and sufficient for PAC learnability by ERM, and the $\gamma$-OIG dimension, which is necessary and sufficient for PAC learnability. The $\gamma$-DS dimension is introduced, which is necessary and conjectured to be sufficient. There are also results for online learning. Strengths: PAC learnability for realizable regression is characterized by an appropriate dimension. This seems to be an important open problem that is resolved. Weaknesses: The various dimensions are hard to understand. It would be nice to see examples. For instance, lines 188-192 were not particularly helpful to understand Definition 5, since I am not sure what it means for $\mathcal{H}$ to contain a cube. Do you mean there is a hypercube of a certain size embedded in every function in $\mathcal{H}$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 263, is there some typo? Maybe you don't mean $\forall i$. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the positive and insightful feedback and questions. > *The various dimensions are hard to understand. It would be nice to see examples. For instance, lines 188-192 were not particularly helpful to understand Definition 5, since I am not sure what it means for H to contain a cube. Do you mean there is a hypercube of a certain size embedded in every function in H ?* Let us first give some intuition behind the definitions of the fat-shattering and $\gamma$-Natarajan dimensions. The other dimensions follow in a similar manner. The crucial idea is to understand what it means to shatter a set of points in each definition. Then the associated dimension is the maximum size of a set shattered by the hypothesis class. The fat-shattering dimension is a natural way to quantify how well the function class can interpolate (with gap $\gamma$) some fixed function. Crucially, this interpolation involves only inequalities (see Definition 4) and hence (at least intuitively) cannot be tight for the realizable setting, where there exists some function that exactly labels the features. Example 1 gives a natural example of a class with infinite fat-shattering dimension which can nevertheless be learned with a single sample in the realizable setting. Before explaining the $\gamma$-Natarajan dimension, let us begin with the definition of the standard Natarajan dimension [Natarajan, 1989]. We say that a set $S = \\{x_1,...,x_n\\}$ of size $n$ is Natarajan-shattered by a concept class $H \subseteq \mathcal{Y}^\mathcal{X}$ if there exist two functions $f,g : S \to \mathcal{Y}$ so that $f(x_i) \neq g(x_i)$ for any $i \in [n]$ and for any $b \in \\{0,1\\}^n$ there exists $h \in H$ such that $h(x_i) = f(x_i)$ if $b_i = 1$ and $h(x_i) = g(x_i)$ if $b_i = 0.$ Note that here we have equalities instead of inequalities (recall the fat-shattering case). 
From a geometric perspective (see [Brukhim et al., 2022]), this means that the space $H$ projected on the set $S$ contains the set $\\{ f(x_1), g(x_1)\\} \times \dots \times \\{ f(x_n), g(x_n) \\}$. This set is "isomorphic" to the Boolean hypercube of size $n$ by mapping $f(x_i)$ to 1 and $g(x_i)$ to 0 for any $i \in [n]$. This means that the Natarajan dimension is essentially the size of the largest Boolean cube contained in $H$. The $\gamma$-Natarajan dimension is the scaled version of the above definition. The only modification is that the two functions $f,g$ map $S$ to $[0,1]$, and we require not only that $f$ and $g$ are everywhere different but that the distance between $f(x_i)$ and $g(x_i)$ is at least $\gamma$. Nevertheless, the geometric perspective stays the same: the $\gamma$-Natarajan dimension is the size of the largest Boolean cube contained in $H$. This is what we mean after Definition 5. Examples 3 and 4 contain some examples computing the $\gamma$-Graph and Natarajan dimensions. Adding more intuitive examples is an important direction for future revisions. We will make the above discussion clearer in the first revision of our work. > *Line 263, is there some typo? maybe you dont mean \forall i* This is indeed a typo, thanks for bringing it up. We meant to say $\forall j \in [n] \setminus \{ i \}$, not $\forall i, j$. We will fix it in the next version of our draft.
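For a finite class over a finite set, the Natarajan-shattering condition described above can be checked by brute force. The following is an illustrative sketch of ours (the function name and the dict encoding of hypotheses are our own choices, not from the paper):

```python
from itertools import product

def natarajan_shattered(S, H, f, g):
    """Check whether the class H (a list of dicts mapping points to labels)
    Natarajan-shatters the finite set S with witness functions f, g."""
    # the witnesses must disagree on every point of S
    if any(f[x] == g[x] for x in S):
        return False
    # every 0/1 pattern over S must be realized by some h in H
    for b in product([0, 1], repeat=len(S)):
        target = {x: (f[x] if bit else g[x]) for x, bit in zip(S, b)}
        if not any(all(h[x] == target[x] for x in S) for h in H):
            return False
    return True
```

The $\gamma$-Natarajan variant would additionally require `abs(f[x] - g[x]) >= gamma` for real-valued labels, in line with the scaled definition above.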
Summary: This work analyzes realizable regression and connects it with several notions of dimension. The authors consider both online learning and PAC learning. They first show that the $\gamma$-OIG dimension characterizes PAC learning and that PAC learning requires a finite $\gamma$-DS dimension. Finally, for online regression, the authors find a dimension that characterizes it: they show that this dimension is an upper bound on the cumulative loss and a lower bound up to some constant. Strengths: The paper provides complexity results for regression in online learning and PAC learning. In binary classification, we have a better understanding of the complexity and of how different dimensions connect. In the regression setting, we do not know a lot, and this paper provides a very good understanding and nice results. The paper is well written and explains the previous work well. Weaknesses: Not a weakness, but can the authors explain why there is a requirement for bounded labels? What happens if the labels are not bounded? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The question in the weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: no limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the positive feedback regarding the importance and the clarity of our results. > *Not a weakness, but can the authors explain why is there a requirement for bounded labels? What happens if the labels are not bounded?* We would like to mention that, in general, we do not require that the label space is bounded. In contrast, we have to assume that the loss function takes values in a bounded space. This is actually necessary since having an unbounded loss in the regression task would potentially make the learning task impossible. For instance, having some fixed accuracy goal, one could construct a learning instance (distribution over labeled examples) that would make estimation with that level of accuracy trivially impossible. We will clarify this point in the first revision of our work.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides combinatorial dimensions that characterize realizable regression in both batch and online settings. Moreover, it provides a minimax-optimal learner up to polylog factors in the batch setting and a minimax-optimal learner in the online setting. Strengths: 1. The paper is well-written, easy to follow, and solves an important open problem of characterizing realizable learnability for real-valued function classes. 2. The paper uses classical ideas such as the Median Boosting algorithm and sample compression schemes, as well as some recent developments in PAC learning theory such as partial concept classes, OIG-based dimensions, etc. Overall, the paper is technically sound and is definitely an important technical contribution to the field. 3. In the online setting, the paper introduces a novel idea of summing scales along each branch of the tree and defining the dimension as the sum of scales. This is a novel and useful technical tool, as it provides a new way of defining dimensions that are not parametrized by a scale even though some form of scale is inherent to the problem setting. Weaknesses: Although the paper does provide a combinatorial characterization of realizable regression, I am not sure if the OIG-based dimension is very insightful. Theoretically, it is a useful abstraction, as it has a finite-character property, and thus the learnability of the problem can, at least technically, be determined using finitely many domain points and functions in the function class. However, the practical utility of such a dimension is questionable. Can it be computed for natural classes such as linear classes, Lipschitz classes, and so forth? Computing upper bounds is generally difficult even for classical dimensions like VC and fat-shattering, but lower bounds on these dimensions are typically easy to compute for some natural classes because of the simplicity of their shattering conditions. Is this also the case for the OIG-based dimension?
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I assume that the fat-shattering dimension upper bounds the OIG-based dimension proposed here. Is there a combinatorial proof of this fact? Also, is there a general property of the class that guarantees that the finiteness of the OIG-based dimension and the fat-shattering dimension coincide? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the positive feedback and insightful questions. > *Although the paper does provide a combinatorial characterization of realizable regression, I am not sure if the OIG-based dimension is very insightful. Theoretically, it is a useful abstraction as it has a finite-character property and thus the learnability of the problem can, at least technically, be determined using finitely many domain points and functions in function classes. However, the practical utility of such dimension is questionable. Can it be computed for natural classes such as linear classes, Lipschitz classes, and so forth? Computing upper bounds is generally difficult even for classical dimensions like VC and fat-shattering, but the lower bounds of these dimensions are typically easy to compute for some natural classes because of simplicity of their shattering conditions. Is it also the case for this OIG based dimension?* In general, we believe that it is difficult to compute the $\gamma$-OIG dimension. Nevertheless, the $\gamma$-OIG dimension is the first complexity measure that tightly characterizes realizable regression. Beyond that, we propose the much more combinatorial $\gamma$-DS dimension, which we conjecture to be the right dimension for this setting. As the reviewer suggests, it would be interesting to compute the OIG-based complexity measure for natural and useful classes. For instance, for the class of linear functions $f(x) = a \cdot x$, when the features are single-dimensional then $\mathbb{D}^{\mathrm{OIG}}_\gamma = 1$ (one sample suffices; each hyperedge of the OIG is a single hypothesis), and the OIG dimension scales with the dimension of the feature space in higher dimensions. A similar analysis can be done for the affine case. Deriving bounds for other families of functions is an important yet non-trivial question. > *I assume that fat-shattering dimension upper bounds the OIG based dimension proposed here.
Is there a combinatorial proof of this fact? Also, is there a general property of the class that guarantees that the finiteness of OIG based dimension and fat-shattering dimension coincide?* The work of Mendelson (2002) provides a sufficient and natural condition that implies finiteness of both measures. In particular, Section 5 of this work shows that classes that contain functions with bounded oscillation (as defined in the aforementioned paper) have finite fat-shattering dimension. This implies that the class is learnable in the agnostic setting and hence is also learnable in the realizable setting. As a result, the OIG-based dimension is also finite. So, bounded oscillation is a general property that guarantees that the finiteness of the OIG-based dimension and the fat-shattering dimension coincide. For the first part of the question, we are not familiar with a combinatorial proof of this statement. Investigating such connections would be an interesting direction for future work. [Mendelson, Improving the Sample Complexity Using Global Data, 2002] --- Rebuttal Comment 1.1: Comment: Thank you for answering my question and addressing my concern about the potential weakness. I will be eagerly following the progress on the conjecture regarding $\gamma$-$\text{DS}$. Overall, the paper tackles a foundational problem of learning theory. The results are significant and the techniques are sound, so I think the paper deserves a highlight at the conference. I am happy to raise the score to 8. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to read our rebuttal and for appreciating our work. We will make sure to address the questions that you and the rest of the reviewers raised in the next version of our draft.
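As a concrete illustration of the one-dimensional linear example from the rebuttal ($f(x) = a \cdot x$), a single realizable sample with a nonzero feature already pins down the hypothesis exactly, matching the claim that one sample suffices. The helper below is a hypothetical sketch of ours, not code from the paper:

```python
def learn_linear_1d(sample):
    """For the class H = {x -> a*x} over one-dimensional features,
    one realizable sample (x, y) with x != 0 determines the
    hypothesis exactly: a = y / x."""
    x, y = sample
    if x == 0:
        # a sample at x = 0 reveals nothing about the slope a
        raise ValueError("need a sample with x != 0")
    a = y / x
    return lambda t: a * t
```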
Visual Instruction Tuning
Accept (oral)
Summary: This paper introduces the first attempt to extend the instruction-tuning paradigm to the multimodal domain. This work has several major contributions: (a) the curation of the first vision-language instruction-following dataset by converting public image-text pairs into an appropriate format using ChatGPT, resulting in 100K+ multimodal instruction-following samples, and (b) results indicating that a multimodal model (consisting of a CLIP visual encoder, a linear projection layer to convert visual tokens into language prompts, and a LLaMA language decoder) trained on this dataset can achieve robust multimodal chatting abilities. All assets used in their research, including datasets and models, are open-source. Strengths: There are two major novel contributions of this work: (1) it introduces one of the first large-scale instruction-following multimodal datasets by leveraging public image-text pairs, and (2) it releases all training code, pre-trained models, and evaluation benchmarks to the wider public. These assets (outlined in supplemental L64) are undeniably valuable to the multimodal research community. Weaknesses: I have several major concerns about (a) the evaluation benchmarks and metrics, (b) the lack of simple baselines such as captioning-based approaches, and (c) missing implementation details such as the sampling procedure. A: Issues with the quantitative analysis of multimodal chatting. - The paper uses rather small evaluation sets (L217-235) to construct LLaVA-Bench, including 30 randomly selected COCO images and 24 in-the-wild images. Why is this subset much smaller than the pre-training dataset with 100K+ multimodal instruction-following samples? And how do you select the 24 in-the-wild images? I couldn't find evidence in the current draft to suggest that these 24 images are not cherry-picked. - The evaluation is text-only, and the authors use GPT-4 to explicitly assign a score.
While prior works such as Vicuna [1] also use GPT-4 to score their responses in a text-only fashion, it is unclear how robust GPT-4 is at multimodal reasoning when doing text-only evaluation. For more robust quantitative analysis, I would encourage the authors to split the instruction-following datasets into train/val splits and also include the results of classic text-only scoring metrics. Small-scale human evaluation would also be beneficial. B: Language prior of the ScienceQA benchmark. - I am shocked that a text-only (vision-blind) GPT-4 can achieve as high as 82% accuracy on ScienceQA, suggesting that this particular VQA benchmark has a severe language prior [2]. Even though prior works also adopt this benchmark in their evaluation, this makes it hard to interpret the progress achieved by LLaVA towards a truly "multimodal" instruction-following agent, as this benchmark can be largely addressed by language-prior information. - Is it possible to report zero-shot LLaVA performance on ScienceQA? C: Simple baselines such as dense captioning: - Even though the model architecture of LLaVA looks elegant, as it only uses a linear projection to connect CLIP's visual tokens to soft language prompts, I believe an even simpler baseline is to train a dense captioning model (using the existing rich descriptions generated by prompting ChatGPT with caption + bounding-box information). At inference time, the dense captioner can turn an image into a rich textual description, which can be sent to an instruction-following text-only LLM (Vicuna/GPT-4). D: Missing implementation details such as sampling. - The sampling procedure (e.g., top-k/nucleus sampling/beam search) can have a profound impact on the quality of generated texts. However, the current draft does not discuss how sampling is performed for LLaVA. Also, when using GPT-4 for text-only evaluation, the exact hyperparameters used, such as temperature, should also be reported. E: Generalization or bias? - Fig. 5 in the appendix suggests that LLaVA is able to generalize to unseen domains, i.e., correctly identifying that the person holding a doge coin is Elon Musk, even though Elon Musk does not appear in LLaVA's training dataset. However, it is unclear whether this is a result of generalization or of the language bias of LLMs. Perhaps your model tends to answer "Elon Musk" when asked about the name of the person, or perhaps it tends to answer "Elon Musk" when there is a doge coin in the image. One minor typo: L99: "and curate such a questions list" -> "to curate such a list of questions" Finally, I have a doubt about "multimodal instruction-following" (this is not a weakness but open to discussion): - Studies in NLP such as [3,9] have suggested that instruction-following is effective mostly because LLMs such as LLaMA are already capable foundation models, and therefore instruction-following can effectively align the model output with human interests. However, it is unclear whether multimodal foundation models such as CLIP (as used in LLaVA) are powerful enough. For example, a wide range of recent works and benchmarks [4,5,6,7,8] suggest that CLIP behaves like a bag-of-words model and does not have strong vision-language reasoning capabilities. As we do not yet have strong enough vision-language foundation models, it is unclear if the multimodal research community is ready to embrace the instruction-following paradigm. [1] Vicuna. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. [2] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. Goyal et al. 2016. [3] LIMA: Less Is More for Alignment. Zhou et al. 2023. [4] When and why vision-language models behave like bags-of-words, and what to do about it? Yuksekgonul et al. 2022. [5] Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality. Thrush et al. 2022. [6] CREPE: Can Vision-Language Foundation Models Reason Compositionally? Ma et al. 2022.
[7] Equivariant Similarity for Vision-Language Foundation Models. Wang et al. 2023. [8] Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores. Lin et al. 2023. [9] The False Promise of Imitating Proprietary LLMs. Gudibande et al. 2023. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: I summarized my most concerned questions about this work: - Why did you not sample train/val splits for evaluating LLaVA's multimodal chatting abilities? Are there specific concerns? - Is it possible to extend LLaVA's evaluation to other VQA benchmarks (as reported by GPT4) such as VQA2.0 which has balanced language prior? - Why is the architecture design of LLaVA more superior than a dense captioning model + instruction-following LLM, if both are trained on the same dataset? - What is the sampling procedure of LLaVA? Given that this paper presents a significant dataset contribution, I would be happy to revise my rating if the authors can address my above-mentioned weaknesses and questions. Updated in Aug 17th: I have increased my rating based on the author's promise to revise the paper by including more discussion on more scientific evaluation metrics and benchmarks for MLLMs. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors discuss about limitations in supplemental material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
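The captioning-based baseline proposed in point C amounts to a two-stage pipeline: caption the image, then hand the caption plus the instruction to a text-only LLM. Below is a minimal sketch of that plumbing; `captioner` and `llm` are hypothetical stand-ins for a dense captioning model and an instruction-following text-only LLM, and the prompt format is illustrative, not from the paper:

```python
def caption_then_answer(image, instruction, captioner, llm):
    """Two-stage baseline: describe the image in text, then send the
    description plus the instruction to a text-only LLM."""
    description = captioner(image)  # image -> rich textual description
    prompt = (
        f"Image description: {description}\n"
        f"Instruction: {instruction}\n"
        f"Answer:"
    )
    return llm(prompt)
```

Such a pipeline is bounded by whatever the captioner happens to describe, which is the trade-off the review asks the authors to weigh against an end-to-end design.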
Rebuttal 1: Rebuttal: **Q1. Why do we use a small LLaVA-Bench-COCO split with only 30 images?** Since we divide the questions into three categories, we have 90 questions for COCO and 60 questions for In-the-Wild. The number of questions in our test set is similar to Vicuna-Bench [1], which has 80 questions. The reason we consider small evaluation sets is the cost of GPT-4 evaluation; the current size keeps the benchmark accessible for, e.g., university labs to report results. **Q2. How do we select the 24 in-the-wild images?** Different from existing benchmarks that evaluate a single specific capability of multimodal models, we seek a benchmark where each evaluation sample can measure multiple integrated capabilities of a model. Such capabilities include recognition, OCR, knowledge, language generation, spatial awareness, counting, etc. To design a benchmark covering a wide range of capabilities while keeping the evaluation cost affordable, we find 24 samples that require multiple multimodal capabilities. Besides, by using the description-based annotations for each image, we are able to create and extend more questions to improve the capability coverage. **Q3. Split into train/val splits** LLaVA-Bench-COCO follows the train/val split convention in machine learning. LLaVA-Instruct-158K uses COCO train 2014 images and annotations, while LLaVA-Bench-COCO samples 30 validation images from COCO val 2014 (L217). Both follow the same data generation pipeline (Sec. 3). It serves as validation to study model alignment and capabilities with consistent visual inputs (L219-L220). A small size is chosen, similar to the minival-split practice, for quick evaluation/iteration during development. **Q4. Language prior in ScienceQA and zero-shot performance** ScienceQA has three question modes: text, image, and no context (Table 7, TXT/IMG/NO). The default evaluation pipeline of ScienceQA includes a "random guess" mechanism, which helps text-only GPT-4 but not LLaVA.
Without this, GPT-4's IMG accuracy drops to 59%, suggesting a more balanced language prior. As suggested by the reviewer, we provide two further experiments: (1) zero-shot performance of LLaVA and Vicuna; (2) Vicuna finetuned on ScienceQA. Since open-source models like Vicuna and LLaVA are still not good at following "format" instructions like "conclude your answer with `The answer is`", we use ChatGPT-3.5 to reformat the answers, following the recent practice in multimodal evaluation. We show that zero-shot LLaVA outperforms both GPT-4 (+7.5%) and Vicuna (+10.3%) on the IMG modality. Besides, zero-shot LLaVA consistently outperforms zero-shot Vicuna in all categories, including text-only questions. We also show that finetuned LLaVA outperforms finetuned Vicuna in almost all categories, with an average performance gain of 5.2%.

| ZeroShot | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | AVG |
|--|--|--|--|--|--|--|--|--|--|
| GPT-4 | 77.44 | 64.23 | 86.45 | 74.73 | 59.05 | 90.38 | 78.49 | 74.36 | 77.01 |
| Vicuna | 66.83 | 60.63 | 69.00 | 65.69 | 56.27 | 71.71 | 68.94 | 60.98 | 66.09 |
| LLaVA | 71.27 | 74.24 | 70.91 | 70.43 | 66.53 | 72.89 | 75.4 | 65.33 | 71.8 |

| Finetuned | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | AVG |
|--|--|--|--|--|--|--|--|--|--|
| Vicuna | 86.9 | 79.3 | 88.55 | 85.58 | 76.85 | 91.43 | 85.68 | 85.83 | 85.73 |
| LLaVA | 90.36 | 95.95 | 88 | 89.49 | 88 | 90.66 | 90.93 | 90.9 | 90.92 |

We'll include these results and discussion in the revision. **Q5. Dense captioning model + LLM vs end-to-end** There are two conceptual advantages. - Completeness of image representations. Dense captioning may not be able to capture all details of an image that the user instruction is concerned with. In contrast, an end-to-end multimodal model can be instruction-aware and only focus on the relevant visual contents via the attention mechanism. See Rebuttal Fig. 2 for a qualitative example. - Single model.
End-to-end models save computational resources and reduce complexity in model serving and request handling, which is beyond the scope of this paper. **Q6. Sampling Details** We set LLaVA's temperature to 0.2, and tuning other hyperparameters does not further improve the quality. Beam search improves LLaVA-13B output quality (67.3 -> 69.8), but it is not trivially compatible with a real-time UI like ChatGPT. We report all numbers without beam search to keep the evaluation consistent with the user interface. For GPT queries, we follow Alpaca and set the temperature and top_p to 1.0. We will include this in the revision. **Q7. Elon Meme: Generalization or bias** This is a great question that is worth further study. We designed two sets of images in Rebuttal Fig. 1 to verify. We find this study intriguing, and it further supports this being a form of generalization by our model. **Q8. Is CLIP powerful enough for multimodal instruction-following?** Thanks for bringing up this interesting topic. We agree that instruction tuning plays a more important role in guiding capable foundation models to follow human intents than in adding new knowledge. In multimodal settings, we leverage the existing capabilities of two models: the LLM for language knowledge and CLIP for image-text alignment, rather than making either of them individually stronger. LLaVA tuning largely aligns the capable foundation LLM to understand image-related human intent. Interestingly, new emergent properties, like OCR in the wild, appear when combining the two existing capabilities. Grid features or the pooled [CLS]-token features of CLIP on their own may lose spatial information (behave like a bag of words). However, in LLaVA's design, we feed the raw image patch features into the LLM, and the positional embeddings of the LLM are incorporated into the visual token representations, implicitly maintaining the spatial information of the visual inputs. [1] Chiang, et al.
"Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.". --- Rebuttal Comment 1.1: Title: Follow-up questions Comment: Thanks for the detailed response. Some of my questions in the original review were not answered and I would still appreciate a discussion on these topics: **Q1. "While prior works such as Vicuna [1] also uses GPT4 to score their responses in a text-only fashion, it is unclear how robust GPT4 is for multimodal reasoning while doing text-only evaluation."** After reading the response, I am even more concerned about GPT-4 text-only evaluation: - It costs money to perform evaluation, which limits the custom test-set in this paper to 30 COCO and 24 in-the-wild images. - GPT-4 performance is not stable across time. - As mentioned by authors in response to Q5, the textual annotation of an image may not capture all the details in an image. - While prior works such as Vicuna also use GPT4 to output numerical scores, it is hard to measure how accurate the numerical score is. **Q2. "Are the 24 in-the-wild images cherry-picked?"** By cherry-picking I actually meant **selecting test samples based on the performance of LLaVA**. While I would like to believe that the authors did not do this, this evaluation set just seems too small for robust and scientific benchmarking. Although it covers a diverse range of skills, I would like the authors to provide more discussion in the revised paper about **what is the scientific way to benchmark such multimodal instruction-following models.** For example, should one be using larger and standardized benchmarks such as VQA2.0 and GQA (or some more recent multimodal benchmarks)? Are there better and more reproducible evaluation metrics than GPT-4's raw numerical scores? Because LLaVA is one of the first works on this trending topic, I believe it is important to answer these questions such that the community can move in the right direction. 
After reading the response to Q4, I would like the authors to clarify a few things about the ScienceQA experiments: **Q3. What is the "random-guess" mode, and why does it help text-only GPT-4? I couldn't find the term "random guess" in the original ScienceQA paper or their GitHub repo, so I would like a more detailed explanation here.** **Q4. How do the text-only GPT-4 and zero-shot/fine-tuned Vicuna baselines utilize the image context? Do they use the image captions from the ScienceQA dataset?** --- Reply to Comment 1.1.1: Title: Discussion: the scientific way to benchmark multimodal instruction-following models Comment: We thank the reviewer for the insightful comments, and we are happy to discuss the evaluation of multimodal models. Existing benchmarks usually focus on a single aspect. However, what is unique to recent large multimodal models (LMMs), like multimodal GPT-4, is that they can perform visual tasks in the wild that require integrated capabilities. For example, in Supplementary Table I, to correctly complete the user's request, the model must have: (1) OCR capability to understand the captions; (2) visual recognition capability to understand that it is a pan of nuggets that looks like a world map; (3) reasoning capability to combine the information and answer why this can be interesting. **Having a model that excels in each single aspect does not necessarily guarantee the capability to combine and reason about them in a single answer**. For example, BLIP-2 is one of the best models, ranking at the top across the board on academic benchmarks, while lacking complex reasoning capabilities. Further evidence can be found in Table 5, where BLIP-2 is capable of answering short-form "conversational" questions but fails to tackle more complex tasks. Some recent evaluation benchmarks on LMMs also revealed similar results. At the time of submission, such a benchmark was lacking.
This motivates us to construct a benchmark that requires the model to leverage different capabilities to correctly complete a user's task. We tried to utilize the resources we had at the time of submission to construct such a benchmark, aiming for scientific and controlled settings. **We do not cherry-pick test samples based on the performance of LLaVA. We do not tune any model/data design choices based on any of the results we obtain on LLaVA-Bench-In-the-Wild, and we use that solely for benchmark purposes**. However, we fully agree with the reviewer that to have a more comprehensive and complete understanding of the model's capability, we would need a benchmark at a larger scale, which we were unable to achieve due to the cost limit. Some recent works show a clever way to address this. Instead of directly evaluating with GPT-4, one can use ChatGPT to extract the answers, or the key aspects required to solve the problem. This allows the model to still answer with natural sentences, while enabling evaluation at scale. We are happy to see such progress in this field. Of course, despite being cheaper, ChatGPT still incurs a cost. During the rebuttal, we find the largest open-source model, LLaMA-2-Chat-70B, elicits impressive capabilities in following complex instructions like the ones we used to query GPT-4 to create LLaVA-Instruct-158K (see response to Q1 of MLCz). It can also be a cost-free alternative. Meanwhile, we believe that it is also important to leverage academic benchmarks, like VQAv2, GQA, etc. The current LLaVA is only trained on natural instructions and responses, making it challenging to evaluate on those standard benchmarks whose ground truth is a single word or a few words. Given the large size of these datasets, it is an open research problem to design efficient and effective metrics. We find the recent VisualGPTScore [1] can be considered an inspiring way to construct a cost-efficient metric.
For example, one can evaluate $P(text|image)$ after the model outputs the long-form answer. We thank the reviewer for bringing up this important topic for discussion, and we are happy to discuss and clarify any further questions or doubts. We will include these discussions in our revision, and we believe our draft will be stronger by including the insightful suggestions from the reviewers. [1] Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores. Lin et al. 2023.
Summary: This paper presents a multimodal instruction-following model and evaluates it. The model is trained by using a frozen vision encoder whose features are used as input to an LLM, which is fine-tuned. It is trained first on simple captioning tasks using a large amount of data, and then on a multimodal instruction-following dataset built by having an LLM generate tasks using the captions and bounding boxes of images. Strengths: - Presents an impressive model that seems to have advanced instruction-following abilities for images and text. - Presents a detailed set of comparisons and evaluations using LLaVA-Bench and the ScienceQA experiments. I also appreciated the re-use of a qualitative example from the GPT-4 paper. - Some interesting insights, such as using the second-to-last layer and getting a sense of the benefit of each kind of instruction-tuning task. Weaknesses: - I think it would be valuable to see if this same pipeline could work with an open-source LLM; using OpenAI's closed LLMs is not ideal for scientific understanding and reproducibility. I in particular wonder if other LLMs can use the bounding-box annotations of the images effectively, which I imagine is much harder than using the captions. - Seeing an evaluation on standard vision/language benchmarks would have been interesting, such as zero-shot VQA. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would the data generation pipeline benefit from even more detailed image descriptions? There are other annotations, like region captions or visual narratives, that could also have been used. I am also curious if the model can handle very low-level detailed questions, like "What color is the rope the man is using?" for Table 3. 278: year -> layer Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have a detailed section in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Does the proposed pipeline work with an open-source LLM?** In our preliminary study, we find that the capability of the teacher is crucial to the quality of the generated instruction-following data (L128-L130). As of the submission deadline, the largest Vicuna model was 13B. As the reviewer suspected, its complex reasoning and spatial reasoning capabilities are still limited and lag behind proprietary models including ChatGPT and GPT-4. However, the recently released LLaMA-2-70B-Chat appears to have narrowed the gap. Due to the large size of the model, it requires a huge amount of VRAM and has a slow inference speed. We therefore conducted a preliminary study, generating around 200 samples for each category (conversation, detailed description, complex reasoning) using LLaMA-2-70B-Chat, ChatGPT, and GPT-4. After generating the responses, we find that, unlike previous open-source models, LLaMA-2-70B-Chat starts to follow complex instructions like creating multimodal instructions. However, it still fails in the conversation category, as it does not correctly follow the conversation format. This may potentially be fixed with more sophisticated prompt tuning; however, due to the limited rebuttal period, we do not evaluate the conversation category. This is also one of the main limitations we find of LLaMA-2-70B-Chat. We then quantitatively evaluate the generated instructions using GPT-4 as the judge on: (1) the correctness of the answers generated, and (2) the complexity of the instructions generated for complex reasoning questions. | | Correctness | Complexity | |----------|-------------|------------| | LLaMA-2-70B-Chat | 8.7 | 7.4 | | ChatGPT | 9.5 | 9.2 | These initial results are promising and suggest that our pipeline can potentially be applied to open-source models as their capabilities improve.
We leave more comprehensive studies and deeper exploration to future research. **Q2. Zero-shot VQA** We evaluate LLaVA-13B on VQA-v2 and OK-VQA. Note that for OK-VQA, we use a slightly relaxed evaluation protocol. Since LLaVA typically outputs a short sentence rather than one or two words, if the sentence generated by LLaVA contains the ground-truth answer, we consider it a correct prediction. We find that although LLaVA lags behind on VQA-v2 (a task where answers can be directly derived from the image), LLaVA performs surprisingly well on OK-VQA (a task where answers require strong knowledge and reasoning) — LLaVA outperforms Flamingo-80B on zero-shot OK-VQA. This is because (1) LLaVA's advantage lies in its capable LLM; (2) all of the instructions in LLaVA prompt the model to output a complete sentence, so it struggles a bit on those standard benchmarks, which require answers of one or two words. We believe the latter issue can be alleviated by incorporating short-form answers into the instruction-tuning data or improving the multimodal in-context learning capabilities, and we leave these to future work. | Models | VQAv2 | OKVQA | |--------------|-------|-------| | PICa (in-context few-shot prompting GPT3-175B) [1] | -- | 48.0 | | Flamingo-80B [2] | 56.3 | 50.6 | | LLaVA-13B | 44.2 | 55.0 | [1] PICa: An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA [2] Flamingo: a visual language model for few-shot learning **Q3. Will the data generation pipeline benefit from even more detailed image descriptions?** This is a great suggestion. As we show in Table 4, adding detailed image descriptions to conversation data can improve the model's capability. We believe that more detailed annotations can further improve the detail and quality of the generated instruction dataset, ultimately resulting in an improved model.
Annotations such as region captions, visual narratives, or scene graphs are definitely valuable for future research. **Q4. Can LLaVA handle questions about extremely low-level details?** Currently, LLaVA is not able to handle low-level details such as the color of the rope, as the rope is very thin and covers only a few pixels after the image is center-cropped and resized to 224x224. However, given that LLaVA is capable of correctly identifying the color of slightly larger regions (e.g. the clothes that the man is ironing), we believe that scaling up the image resolution and/or incorporating more detailed descriptions, as mentioned by the reviewer in the comment above, could unlock LLaVA's low-level detail recognition/reasoning capabilities. > 278: year -> layer Thanks for pointing out the typo. We'll fix it in the revision. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we would like to thank you for your insightful feedback. We hope that your questions are addressed with our rebuttal. Please let us know if there are any further questions that need clarification. --- Rebuttal Comment 1.2: Comment: Thank you for answering my questions. I think discussion about the weaknesses of these systems is also very valuable. I continue to be very positive about this paper.
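As a side note on the zero-shot VQA discussion in Q2 above, the relaxed OK-VQA scoring rule (a long-form prediction counts as correct if it contains a ground-truth answer) can be sketched as below; the function names and normalization are illustrative assumptions, not the authors' actual evaluation code:

```python
def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for a forgiving substring match."""
    return " ".join(text.lower().split())

def relaxed_match(prediction: str, ground_truths: list[str]) -> bool:
    """True if any ground-truth answer appears inside the generated sentence."""
    pred = normalize(prediction)
    return any(normalize(gt) in pred for gt in ground_truths)

def relaxed_accuracy(predictions: list[str], answer_sets: list[list[str]]) -> float:
    """Fraction of samples whose long-form prediction contains a ground truth."""
    hits = sum(relaxed_match(p, a) for p, a in zip(predictions, answer_sets))
    return hits / len(predictions)
```

Under this rule a full-sentence answer such as "The animal in the image is a zebra." is credited for the short ground truth "zebra", which is why it is more forgiving than exact-match VQA scoring.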
Summary: This paper studies instruction tuning in the multimodal domain. Instruction tuning has recently drawn a lot of attention in the large language model (LLM) field, and hence it is interesting and important to study similar capabilities in multimodal models. This paper is a pioneering work in this direction. It constructs the first instruction tuning dataset, LLaVA, and conducts comprehensive analysis. This is an important step towards a general-purpose multimodal language model (MMLM) that can interact with humans using natural language. Strengths: 1. To the best of my knowledge, this is the first work on multimodal instruction tuning. This is an important direction and hence the impact of this paper is huge. 2. The dataset constructed in this paper is both useful and inspiring. It constitutes an impactful first step for future studies in this field. 3. The paper is well written and contains a lot of detailed studies. Weaknesses: I don't see any major weakness in the paper. There are some minor ones that can be improved, but I understand they may go beyond the scope of this paper. For example, it would be more comprehensive to conduct ablations on models other than Vicuna. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Why did you use a 2e-3 learning rate for pretraining? This seems to be a very large value given the 128 batch size. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Ablations on LLMs other than Vicuna.** As of the paper submission deadline, Vicuna was the most widely adopted open-source instruction-tuned LLM. Other strong instruction-tuned LLMs have come out since then, including MPT, LLaMA-2-Chat, etc. We present initial studies using these other LLMs on LLaVA-Bench-In-the-Wild below. We also include the performance of the base LLM measured by MMLU and MTBench from the Vicuna leaderboard. | BaseLLM | MMLU | MTBench | LLaVA-Bench-In-the-Wild | |------------|------|---------|----------| | MPT-7B | 32 | 5.42 | 53.6 | | Vicuna-7B | 49.8 | 6.17 | 63.3 | | LLaMA-2-7B-Chat | 45.8 | 6.27 | 63.2 | This initial study exhibits a correlation between the capability of the base LLM and the performance of the resulting multimodal model. It will be interesting to dig further into the relationship between base LLMs and multimodal models. It is worth mentioning that MPT-7B and Vicuna-7B are instruction-tuned with supervised finetuning, while LLaMA-2-7B-Chat is additionally finetuned with RLHF. It will also be interesting to see the influence of RLHF on multimodal capabilities. We believe this initial study will be valuable to the research community for better understanding the mechanism and capability of multimodal models. We will release these results and corresponding model checkpoints to the public. **Q2. Why do we use a 2e-3 learning rate for pretraining?** Since we pretrain our model on a small 595K dataset, there are only around 4.5K iterations during training. Given the few steps the model is optimized for, we empirically find that a larger 2e-3 learning rate converges slightly faster than lower learning rates like 2e-4. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we would like to thank you for your insightful feedback. We hope that your questions are addressed with our rebuttal. Please let us know if there are any further questions that need clarification.
Summary: This paper introduced LLaVA, an effective visual instruction tuning method to turn Large Language Models (LLMs) into multi-modal LLMs. LLaVA is first pre-trained on image-text pairs to connect a visual encoder (CLIP) and an LLM (Vicuna). Then the authors utilize GPT-4 to generate ~150K visual instruction samples for training visual instruction-following models. LLaVA is evaluated on two diverse and challenging benchmarks, as well as a science question answering dataset. Overall, LLaVA is a very early attempt at enhancing LLMs with multi-modal capacity. I believe it will greatly inspire the research community. Strengths: - Expands instruction tuning to the vision domain. Visual instruction tuning is a new research problem for vision-language models. It endows vision-language models with powerful comprehension and reasoning capabilities. - A new pipeline for visual instruction data generation. LLaVA takes image captioning or object detection results as GPT-4's input for visual instruction generation. This is an effective way to quickly generate a large amount of visual instruction data. - A strong visual instruction model, LLaVA, with available pretrained models and demos. - A multi-modal instruction-following benchmark. Weaknesses: - Data quality. The proposed visual instruction data is automatically generated by GPT-4, but there seems to be a lack of validation of the data quality. For example, MiniGPT-4 [1] manually verifies the correctness of each image description. High-quality instruction data has also been shown to be important for pure LLMs in LIMA [2]. - Is 595K image-text pairs enough for vision-language alignment? BLIP-2 uses >100M image-text pairs for vision-language alignment, while LLaVA only uses 595K samples from CC3M. Have the authors tried to use more pre-training data, and will the model be further improved? Besides, I noticed that LLaVA inputs 256 CLIP visual tokens into the LLM, which is much larger than BLIP-2 and MiniGPT-4 (~30 tokens).
Such a design will make the training much slower. So, do we really need 256 tokens? - The LLM in LLaVA is fully fine-tuned in the second stage. Will this lead to degradation of the LLM's ability? Are there verification results on traditional LM tasks? [1] Enhancing Vision-Language Understanding with Advanced Large Language Models [2] LIMA: Less Is More for Alignment. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. The authors have addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. Data Quality** We agree that high-quality instruction data is critical and have taken measures to ensure the data quality. First, we create image descriptions directly from the well-established, manually annotated MSCOCO dataset, which contains bounding box and caption annotations (L103-L110). This ensures the visual groundedness of the textual context input to GPT-4. Second, we perform text-based filtering to remove invalid responses: (1) incomplete responses; (2) responses where GPT refuses to provide an answer; (3) responses containing words that make the answer sound like it is not looking at the images (e.g. "according to the captions"). We'll include the detailed list of keywords in the revised appendix. Third, we iterate on and validate our prompts using a subset of around 1000 samples to check the visual groundedness of the generated outputs, and find that GPT-4 consistently provides higher-quality instruction-following data (L128-L130). Finally, we ablate the combination of different types of generated instruction-following data using LLaVA-Bench-COCO (Table 4, L217-L227). Note that MiniGPT-4 is concurrent work to LLaVA, but we are more than happy to discuss it. > *From the MiniGPT-4 paper: ...we check if each generated image description **follows our desired format**, and also manually refine the generated captions by **eliminating redundant words or sentences** ....* According to the MiniGPT-4 paper, the manual check is only performed to correct **textual format** errors, without mentioning checking the visual groundedness of the generated responses. Furthermore, the source image descriptions of MiniGPT-4 are generated by the first-stage MiniGPT-4. Besides the textual errors that are mentioned in the paper, the correctness and visual groundedness of MiniGPT-4's generated descriptions are unclear. We believe that our data is larger-scale, more diverse and content-rich, and its quality is better controlled. **Q2.
Is 595K image-text pairs enough for vision-language alignment?** Since CC3M only has around 2M images available to download from the Internet, we choose the BLIP-captioned LAION-CC-SBU dataset (which is the training dataset of BLIP-2). Due to limited resources and time during the rebuttal period, we have tried two subsets: 600K samples and 6M samples. We ablate this with Vicuna-13B using the same schedule as described in the paper, and evaluate on LLaVA-Bench-In-the-Wild. As shown below, when scaling up the pretraining dataset from 600K to 6M, the overall performance on LLaVA-Bench-In-the-Wild does not vary much (66.8 vs 66.5). | Pretrain Samples | Conversation | Detail | Complex | All | |--|--|--|--|--| | 600K | 56.7±3.9 | 54.2±3.1 | 80.2±1.5 | 66.8±0.8 | | 6M | 55.1±3.9 | 56.3±2.6 | 79.6±2.9 | 66.5±0.5 | We believe the fast alignment of LLaVA can mainly be attributed to two reasons. First, our vision encoder, CLIP, was pretrained with an image-text contrastive loss, so its visual features are already aligned to a text space. It is sufficient to re-align this to a different text space using a linear layer. Second, it is much easier and requires fewer samples to optimize the linear layer (5.2M parameters) than the Q-Former from BLIP-2, which contains 1.1B parameters, orders of magnitude more than LLaVA's alignment-stage trainable parameters. We thank the reviewer for bringing up this topic, and will include this discussion in the revision. **Q3. LLaVA inputs 256 CLIP visual tokens into the LLM, which is much larger than BLIP-2 and MiniGPT-4 (~30 tokens). Such a design will make the training much slower. So, do we really need 256 tokens?** We are happy to compare with the concurrent work, MiniGPT-4. *First, will this make the training slower?* Since LLaVA uses 256 tokens, it is around 4-5x slower than MiniGPT-4 per training iteration.
However, since we only need 600K samples to converge, the total pretraining cost of LLaVA is 4 hours on 8x A100s (Supp. L62). MiniGPT-4 pretrains with ~6M image-text pairs and requires approximately 10 hours of training on 4x A100s (which roughly equates to 5 hours on 8x A100s). When considering the total training time, LLaVA is slightly faster. *Second, do we really need 256 tokens?* This is an interesting research question open to discussion. Compressing 256 tokens to 32 tokens is a process of information compression. We find that this is detrimental to OCR capability, which is an interesting emergent capability of LLaVA. For example, on a suite of 27 text-recognition-related academic datasets, LLaVA consistently outperforms MiniGPT-4 on 23 out of 27 datasets, despite LLaVA being trained with an order of magnitude less image-text training data. We also show qualitatively, in Fig. 2 of the Rebuttal Supplementary, that such a compression process may discard information that the user is curious about: LLaVA recognizes the website that the image comes from by reading the text in the watermark, while MiniGPT-4 fails. Furthermore, having finer patch-level features can allow the model to perform region-level reasoning more easily, as the region-level information is better preserved and readily extractable for downstream models. **Q4. Does full-model finetuning lead to the degradation of the LLM's ability?** We show that LLaVA and Vicuna are comparable on MTBench [2], and LLaVA is only slightly worse (-0.8%) on MMLU [1]. | | MTBench | MMLU | |---|---|--| | Vicuna-13B | 6.57 | 55.8 | | LLaVA-13B | 6.63 | 55.0 | We find this result encouraging, and it can be partially attributed to the inclusion of complex reasoning questions and long-form answers in LLaVA-Instruct-158K, which helps maintain the language capabilities of LLaVA. We also show that on ScienceQA, LLaVA even slightly outperforms Vicuna on text-only categories. See details in R4 to reviewer mhaS. [1] Hendrycks, et al.
"Measuring massive multitask language understanding." [2] Zheng, et al. "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena." --- Rebuttal Comment 1.1: Comment: Dear reviewer, we would like to thank you for your insightful feedback. We hope that your concerns are addressed with our rebuttal. Please let us know if there are any further questions that need clarification.
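As an aside on the alignment design discussed in Q2 above, the idea of mapping frozen CLIP patch features into the LLM's embedding space with a single trainable linear layer can be sketched in numpy as below. The dimensions are illustrative assumptions (256 patch tokens of width 1024 projected to a 5120-dim, LLaMA-13B-like hidden size), not the exact configuration:

```python
import numpy as np

# Hypothetical dimensions: 256 CLIP patch tokens of width 1024,
# projected into a 5120-dim LLM embedding space.
NUM_TOKENS, CLIP_DIM, LLM_DIM = 256, 1024, 5120

rng = np.random.default_rng(0)

# Frozen vision-encoder output for one image: (tokens, clip_dim).
clip_features = rng.standard_normal((NUM_TOKENS, CLIP_DIM))

# The only trainable alignment-stage parameters: one linear projection.
W = rng.standard_normal((CLIP_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

# Visual "tokens" fed to the LLM alongside the text embeddings.
visual_tokens = clip_features @ W + b   # shape: (256, 5120)
```

Under these assumed dimensions, such a projection has 1024 x 5120 + 5120 ≈ 5.2M trainable parameters, consistent with the figure quoted in the response above.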
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and their thoughtful comments and questions. We are encouraged that the reviewers find that: - Our work is a pioneer in the multimodal instruction tuning field (RuBm, MLCz, A2dU, mhaS). It will greatly inspire the research community (RuBm) and have a huge impact on this field (MLCz). - We have made significant contributions, including - an inspiring pipeline for multimodal instruction data generation (RuBm, MLCz) - one of the first large-scale vision-language instruction-following datasets (mhaS) and a multimodal instruction-following benchmark (RuBm) - LLaVA, a strong visual instruction model (RuBm) with elegant designs (mhaS) and impressive instruction-following capabilities for images and text (A2dU). - fully open-sourced assets that are undeniably valuable to the multimodal research community (mhaS). - The paper is well written (MLCz), contains some interesting insights (A2dU), and has a detailed set of comparisons and evaluations (MLCz, A2dU). We tried our best to address the questions as time allowed. We believe the comments & revisions have made the paper stronger, and we thank all the reviewers for their help. Please find individual responses to your questions below. Pdf: /pdf/f1c187bca8c1cd872ad932bb520a3de758e955fa.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
GAUCHE: A Library for Gaussian Processes in Chemistry
Accept (poster)
Summary: Gaussian processes are widely used for black-box optimization when data is scarce. On the other hand, effectively representing molecules, proteins, and chemical reactions is a dedicated research area in molecular machine learning. Although separate tools exist to address these two challenges, this paper introduces a novel library that integrates both. By doing so, it enables chemists without extensive knowledge of Bayesian optimization to harness its benefits. Strengths: The library's objective is clearly defined and holds potential for the chemistry and machine learning research communities. The persuasive comparison with existing work in Section 5 strengthens its position. Sections 2.3 and 2.4 provide thorough explanations of molecular and chemistry reactions. Weaknesses: I find the objective of the paper somewhat unclear. Here is my understanding and suggested improvements: * The paper's main aim, as summarized in Section 5, is to create a unified library called GAUCHE that combines chemistry libraries (for molecule/chemical reaction representation) with Bayesian optimization libraries. Such libraries exist separately; GAUCHE aims to unify them. * Sections 2 and 3 effectively describe the library's provided representations and kernels. * However, I fail to grasp the purpose of the experiments described in Section 4. The message seems to be that "the library works and enables Bayesian optimization on various chemistry benchmarks." This leaves room for improvement: what sets GAUCHE apart? Why should users choose it over manually combining a molecular representation with a GP library? Section 5 partially answers these questions – and should be presented earlier in the paper, in my opinion. * Overall, it appears that the library does not offer new theoretical insights or algorithms.
While this is not necessarily a problem (as GAUCHE fills a relevant niche, allowing non-GP-expert chemists to utilize state-of-the-art black-box optimization for chemical reactions), **the paper should clearly emphasize the library's strengths**. These may include (a) ease of use, (b) a modular code structure enabling user extension, or (c) superior performance on public benchmarks. In that spirit, a helpful addition would be a table comparing different libraries, with the different libraries (containing GAUCHE) in rows and the different features (GP, molecular representation, etc) in columns, and cells indicating whether the feature is implemented or not. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * `l. 60`, the covariance $\sigma_y$ is not introduced. I understand that this assumes a Bayesian linear model of the form $f({\bf x}) = \phi({\bf x})^\top {\bf w} + \sigma_y \epsilon$ where $\epsilon$ follows a standard normal distribution and $K({\bf x}, {\bf x}') = \phi({\bf x})^\top \phi ({\bf x}')$, as in [1, Eq. 2.21]. I am not familiar with the BO literature, and perhaps it's the only model considered, but it would be worth mentioning that you are considering a Bayesian linear model, along with a proper citation to justify the formula. * `l. 68`: again, adding a reference, e.g. [1, Eq. 5.8], seems necessary to justify the NLML formula. * Fig. 1 is not informative for someone who does not have a GP or chemistry background. After reading sec. 2.{3, 4, 5}, we understand that the 3rd row shows the possible representations for the different applications considered in the 2nd row (even though "SMARTS" is not described in sec. 2.4). In any case, the figure would benefit from a more detailed caption, so that the message it is trying to convey appears clearly. * Tables 1 and 2: why is the font so small? It would be convenient to have the same font size as the rest of the document. * `l. 
227`, I don't know the 3 metrics for measuring the quality of the uncertainty estimates. A word of introduction explaining what they measure and what their strengths and limitations are would be welcome here. **Related literature.** * You may be interested in [2], which provides a convolutional kernel network for graph-structured data. **Typos.** * `l. 55`: $m(\mathbf{x'})$ instead of $m(\mathbf{x})$ * `l. 80`: extra "the" **References.** [1] Rasmussen and Williams - 2006 - Gaussian processes for machine learning [2] Convolutional Kernel Networks for Graph-Structured Data – 2020 – Dexiong Chen, Laurent Jacob, Julien Mairal Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The limitations identified by the authors regarding specific algorithms for certain problems are noteworthy but can be left for future research. Aside from that, my main concerns are summarized in the "Weaknesses" section mentioned previously. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
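For cross-reference on the NLML formula this review asks to see justified: the standard textbook form (Rasmussen & Williams 2006, Eq. 5.8), stated here under the usual Gaussian-noise GP model rather than the paper's exact notation, is

```latex
-\log p(\mathbf{y} \mid X) \;=\;
\frac{1}{2}\,\mathbf{y}^\top \left(K + \sigma_y^2 I\right)^{-1}\mathbf{y}
\;+\; \frac{1}{2}\,\log\left|K + \sigma_y^2 I\right|
\;+\; \frac{n}{2}\,\log 2\pi
```

where $K$ is the $n \times n$ kernel matrix over the training inputs and $\sigma_y^2$ the additive noise variance referenced in the review.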
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript and for providing detailed, helpful and constructive feedback. We were happy to see that you appreciate the practical usefulness of a well-designed and easy-to-use library that enables scientific experts to make use of Bayesian optimisation in low-data regimes. The main concern you raised in your review relates to the way we present the objectives and strengths of our work and we sincerely appreciate the feedback. We will try to address your points below. &nbsp; ## __Clarifying the Added Value of GAUCHE__ ## &nbsp; > The paper’s main aim, as summarized in Section 5, is to create a unified library called GAUCHE that combines chemistry libraries (for molecule/chemical reaction representation) with Bayesian optimization libraries. Such libraries exist separately, GAUCHE aims to unify them. &nbsp; As you correctly stated, our main aim is to create an open and unified community resource that makes it as easy as possible to combine current (and future) state-of-the-art molecular representations and kernels with Gaussian Process and Bayesian Optimization libraries. The central motivation behind this approach is to create a public repository that enables expert chemists and materials scientists with little background in GPs or BO to make use of state-of-the-art black-box optimization techniques, as well as to minimize the time and effort spent on re-implementing redundant optimization pipelines. &nbsp; > However, I fail to grasp the purpose of the experiments described in Section 4. (…) What sets GAUCHE apart? Why should users choose it over manually combining a molecular representation with a GP library? 
&nbsp; The purpose of the experiments in Section 4 (and the corresponding Jupyter Notebook tutorials) is to provide prospective users with a convincing demonstration that GAUCHE presents a modular and user-friendly platform for the rapid exploration and prototyping of different molecular representations and similarity kernels to establish which setup (if any) works best for a given application. &nbsp; > (…) the paper should clearly emphasize the library’s strengths. These may include (a) ease of use, (b) a modular code structure enabling user extension, or (c) superior performance on public benchmarks. &nbsp; We will make sure to refine these points in Section 5 and to also clearly state them in both the Abstract and Introduction of the paper. &nbsp; > In that spirit, a helpful addition would be a table comparing different libraries (…) &nbsp; Thank you for the suggestion. We agree that clearly stating the value that GAUCHE adds over existing libraries by summarizing Section 5 into an intuitive table would help to further clarify the points we make above. We have included a draft of this table below and a more polished version in the pdf submitted with our general response. &nbsp; | Library | Gaussian Processes | Bayesian Optimisation | Molecular Representations | Chemistry Tutorials | Graph Kernels | Bit Vector Kernels | String Kernels | |----------|--------------------|-----------------------|---------------------------|---------------------|---------------|--------------------|----------------| | GPyTorch | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | GPflow | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | | BoTorch | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | | DeepChem | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | | GraKel | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | | FlowMO | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | | GAUCHE | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | &nbsp; ## __Questions__ ## &nbsp; 1. Great spot on the $\sigma_y$ term. This indeed represents a coefficient for additive Gaussian noise and we will update the manuscript to include this definition. 
2. Many thanks for the reference! We have opened a GitHub issue to introduce convolutional kernel networks. &nbsp; ## __Summary__ ## &nbsp; We hope that these points clarify the objective of the paper and address the presentational concerns you raised. Please let us know if you have any further questions! &nbsp; Sincerely, The Authors &nbsp; --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. The table comparing GAUCHE to other frameworks is compelling. I believe that any libraries that streamline the utilization of advanced machine learning tools are valuable. **I am willing to raise my rating from 5 to 6**. I still have reservations about fully assessing the library's contribution. --- Reply to Comment 1.1.1: Title: On GAUCHE's Contribution Comment: &nbsp; Thank you for the quick response! We are happy to hear that you found the table compelling and that you appreciate the value of a library that lowers the barrier for applying advanced molecular machine learning tools in practical research settings. &nbsp; > I still have reservations about fully assessing the library’s contribution. &nbsp; To address your remaining concern and further clarify the contributions of our work, we would like to expand on the points in our initial rebuttal to: &nbsp; 1. Highlight some of the technical innovations that were necessary to extend the existing open-source GP/BO stack to work for discrete kernels that operate over molecular representations (i.e. bit vectors and graphs). 2. Outline the concrete impact our work has had by listing instances in which GAUCHE has already been applied by other researchers in a range of practical settings. &nbsp; ## __Substantial Technical Contributions__ ## &nbsp; One major limitation of existing GP frameworks is that they are built with continuous data in $\mathbb{R}^d$ in mind. For instance, the kernel base class of GPyTorch assumes that custom kernel sub-classes are based on Euclidean distance metrics.
In GAUCHE, we provide a parallelizable and batch-GP-compatible alternative to this base class that can be easily extended to implement arbitrary bit and count vector kernels. As another example, we would like to point out that current GPU-enabled GP libraries do not natively support non-tensorial inputs, necessitating a substantial amount of engineering work to extend them to graph-structured input spaces. The resulting SIGP class is, to the best of our knowledge, the first open-source GP implementation that enables GPU-accelerated and autodiff-based end-to-end learning over graph kernel hyperparameters, including all kernels in the GraKel library. While certain kernels - such as the Weisfeiler-Lehman (WL) kernel - have concrete feature functions that could be fit directly by a GP, many others - such as the random walk and shortest path kernels - don’t, which was our motivation for designing a more general wrapper framework. &nbsp; ## __Real-World Impact__ ## &nbsp; By combining these technical innovations with a range of easy-to-adapt data loaders, featurization functions and notebook tutorials, we have made it as easy as possible to integrate molecular GP and BO models into real-world research workflows. Specifically, we would like to mention that GAUCHE has already successfully contributed to a range of academic and industrial research efforts. We are currently aware of at least three application domains in which GAUCHE has featured as a core component of published work: &nbsp; 1. Additive screening for chemical reaction optimization 2. Catalyst discovery 3. Self-driving laboratories &nbsp; Additionally, GAUCHE has been a core component in enabling novel Bayesian optimization methodologies to be evaluated on molecular datasets. Published work in this direction has included: &nbsp; 1. The evaluation of a novel multiobjective Bayesian optimization scheme on the task of identifying molecules with favourable cell permeability for drug delivery. 2. 
The evaluation of a novel method featuring Bayesian quadrature on a) the task of identifying molecules with anti-malarial properties and b) the task of identifying molecules with promising solvation capabilities for use in lithium-ion battery electrolytes. &nbsp; We have consulted the AC and SAC on the best way to provide these references without compromising the double-blind review process and are currently waiting to hear back. &nbsp; ## __Summary__ ## &nbsp; We hope that these clarifications are helpful in allowing you to fully assess the contributions of our work and are more than happy to provide additional details and answer any follow-up questions! &nbsp; Sincerely, The Authors
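As a concrete illustration of the bit-vector kernel arithmetic discussed in this thread, the Jaccard-Tanimoto similarity that fingerprint kernels of this kind build on can be sketched in plain Python (the function name and list-based form below are illustrative, not GAUCHE's actual API):

```python
def tanimoto(x, z):
    """Jaccard-Tanimoto similarity between two binary fingerprints.

    On bit vectors the dot product <x, z> counts shared on-bits and
    ||x||^2 counts on-bits, giving
        k(x, z) = <x, z> / (||x||^2 + ||z||^2 - <x, z>).
    """
    dot = sum(a & b for a, b in zip(x, z))
    denom = sum(x) + sum(z) - dot
    return dot / denom if denom else 1.0  # two all-zero vectors count as identical

# Gram matrix over a toy set of 4-bit fingerprints
fps = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]
K = [[tanimoto(x, z) for z in fps] for x in fps]
```

In a GPyTorch-style implementation, a computation like this would live in the `forward` method of a kernel subclass and operate on batched tensors rather than Python lists, which is the kind of base-class adaptation the rebuttal describes.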
Summary: The authors discuss Molecular, Reaction, and Protein Representations and provide a unified framework for these models. Python's GPyTorch library is used to train the Gaussian processes. The authors define certain kernels for Gaussian processes to fit and perform several experiments to evaluate the performance. The paper also contains a brief overview of Gaussian processes and Bayesian optimization. Strengths: The authors properly analyze the related kernel functions used in chemistry. Indeed, they provide an overview of the applications and representations available in the GAUCHE library. The library seems easy to follow and can adapt to proteins, molecules, and chemical reactions. Weaknesses: The major weakness is the contribution. The training task uses GPyTorch, a powerful library in Python for Gaussian processes. The main contribution of the paper is to provide some important kernels in chemistry. It seems some of the kernels have been developed in other libraries or their scripts are available on the internet. The theory part of the paper is not rich enough. Section 3 only explains the relevant kernels and does not provide novel ideas or solutions for the methodological challenges. The Gaussian process methods and kernels are already there without new inventions, and the paper does not describe how to adapt them to chemistry problems. It would be great if the authors described the major benefits of the proposed solution compared to other Gaussian process libraries, apart from the kernels. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the major limitations of the available R/Python packages that prevent their use in chemistry problems, and how does this package address them? If the kernels are omitted, what is the main advantage of the library compared to the others? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript and for providing valuable and helpful feedback. We were happy to see that you appreciated the thorough treatment of the representations, kernels and applications we cover in our work. The main concern you raised in your review is the novelty of our contributions, which we aim to clarify below. &nbsp; ## __Clarifying the Added Value of GAUCHE__ ## &nbsp; > It would be great if the authors describe the major benefits of the proposed solution compared to the other libraries related to the Gaussian processes, except the kernels. &nbsp; The main contribution of our work is to create an open and unified community resource that makes it as easy as possible to combine current (and future) state-of-the-art molecular representations and kernels with existing Gaussian Process and Bayesian Optimization infrastructure. The central motivation behind this approach is to create a public repository that enables expert chemists and materials scientists with little background in GPs or BO to make use of state-of-the-art black-box optimization techniques, as well as to minimize the time and effort spent on re-implementing redundant optimization pipelines. We refer to Section 5 of the manuscript for a thorough review of prior art that aims to convey the added value that GAUCHE provides over existing libraries. For enhanced clarity, we have summarised this comparison as a table in the general comment above as well as in the attached pdf. &nbsp; > The main contribution of the paper is to provide some important kernels in chemistry. It seems some of the kernels have been developed in other libraries or their scripts are available on the internet. (…) Section 3 only explains the relevant kernels and does not provide novel ideas or solutions for the methodological challenges. 
&nbsp; While it is correct that we mostly build on kernels that are well-established in the literature, we would like to point out that 1. Discrete kernels are not straightforwardly compatible with the design assumptions of GPyTorch/BoTorch (see below for more), and a unified, open-source repository of compatible implementations is therefore very helpful; 2. Many of these kernels have never been used in the context of GP regression; 3. Fewer still have been used for Bayesian optimization - especially in the context of chemistry, materials science and structural biology. &nbsp; We strongly agree that the development of novel and more performant kernels and representations is an important area of future research, but would like to point out that this is an orthogonal gap in the literature to the one we aim to address with this manuscript - though one that could strongly benefit from the robust foundation we aim to provide with GAUCHE. &nbsp; > The Gaussian process methods and kernels are already there without new inventions, and the paper does not describe how to adapt them to chemistry problems. &nbsp; We would like to contest the claim that we do not describe how to adapt our framework to chemistry problems, as we put significant effort into providing a range of easy-to-adapt Jupyter Notebook tutorials that demonstrate how to use GAUCHE for various real-world tasks in medicinal chemistry and reaction optimization. Going even further, we would like to point out that the library has already been used in a range of real-world production settings. We are unsure of how to link to these examples without violating the double-blindness of the review process and are currently liaising with the AC. &nbsp; ## __Questions__ ## &nbsp; > What are the major limitations of the available packages in R/Python so that they can not be used in the chemistry problems and how this package addresses them?
&nbsp; One major limitation of existing GP frameworks is that they are built with continuous data in $\mathbb{R}^d$ in mind. For instance, the kernel base class of GPyTorch assumes that custom kernel sub-classes are based on Euclidean distance metrics. In GAUCHE, we provide a parallelizable and batch-GP-compatible alternative to this base class that can be easily extended to implement arbitrary bit and count vector kernels. As another example, we would like to point out that the associated GP optimization utilities only work for tensor-valued input spaces and required substantial adjustment to work on graph-structured inputs. &nbsp; ## __Summary__ ## &nbsp; We hope that these points clarify the objective of the paper and address the presentational concerns you raised. Please let us know if you have any further questions! &nbsp; Sincerely, The Authors &nbsp; --- Rebuttal Comment 1.1: Comment: Thanks a lot for the authors' comprehensive answers to the concerns and questions. I checked the rebuttal and also the other reviewers' comments. Also, the list of papers that used this library was useful. Although I still have concerns about the contribution and theoretical novelty of this work, I want to raise my rating from 4 to 5. --- Reply to Comment 1.1.1: Title: Thank You for the Response and Feedback Comment: &nbsp; Many thanks once again for taking the time to review the paper and offer feedback! &nbsp; Sincerely, The Authors
Summary: This article presents a library for Gaussian process-based inference with a special focus on chemistry applications. At heart, the library contains two classes of objects: kernels and data loaders. The article introduces Gaussian processes and chemistry-specific kernels and discusses the interfacing of the library with other frameworks, in particular relating to Bayesian optimisation. Strengths: The proposed library seems to fill a gap in the open-source GP stack, providing specialised building blocks for use in chemistry. The code cleanliness is rather high and the code is well-tested. The paper is fairly clear (see however my questions in the weaknesses) and provides a reasonable introduction to the problem and to the GP literature around it; the superior performance of GPs (compared to GraKel) is adequately illustrated. The relation to prior works and existing solutions is also well documented. Weaknesses: As it stands, there is no clear indication in the article that the library has had any impact on the practice of data-driven chemistry. If possible, the authors should include examples of real-life uses of GAUCHE, and if there are none, that would suggest that the library is likely not mature enough to warrant a full-size article (although it would likely be a good workshop paper). I have found it hard to understand which "20+ bespoke" kernels were in fact implemented. For instance, the graph kernels are described in the article, but the code provided shows nothing under "gauche/kernels/graph_kernels". Similarly, I cannot count 20 kernels in the code and would like the authors to specifically list these. The future of the library is also not mentioned: what is the governance model and what are the next steps? Is it to implement more kernels? When new kernels are implemented, then what? As it stands it seems that the authors suggest they will "wait and see", depending on the feedback they receive from the practitioners.
This makes me believe that publishing an article on the library itself is then fairly premature. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: As discussed in the weaknesses, my main reasons for the negative rating are (i) that the supported methods are not well documented within the article (ii) the future of the library is really unclear: for instance > We seek to further grow our userbase and solicit feedback from laboratory practitioners on the most common use-cases for BO and GP modelling in molecular discovery campaigns. is an admission of stale development and of an unclear vision for the future of the library. Some clarity on this would be much appreciated. A remark on my confidence level: I am by no means in the position to judge the relevancy of introducing a package for chemistry-specialised kernels and do not really understand the end applications. I will therefore fully defer to other reviewers when it comes to this. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript and for providing helpful and constructive feedback. We were happy to see you emphasize how our library complements the current open-source molecular machine learning stack and acknowledge the high quality of our code and tests. The concerns you raised in your review relate to examples of real-life use-cases, our open-source governance model, and the claims we make regarding the number of available kernels. We will address each of these points in turn below. &nbsp; ## __Examples of Real-World Use-Cases__ ## &nbsp; > If possible, the authors should include examples of real-life uses of GAUCHE &nbsp; This is a great point and central to our motives for introducing GAUCHE. We are eager to share at least four instances (for which public references are available) in which GAUCHE has been utilized in real-world research and production settings. As stated in the general comment, we will liaise with the AC/SAC to determine the best way to provide these references! &nbsp; ## __Plans for Future Development__ ## &nbsp; > the future of the library is really unclear &nbsp; Pending the AC's advice on the double-blind policy, the recent applications of GAUCHE that we hope to share should give some flavour of the user-inspired extensions we have recently implemented! Additionally, in the attached pdf we have included a new SOTA performance on the photoswitch benchmark motivated by **reviewer rK4e**'s suggestion to investigate additional fingerprints/descriptors. &nbsp; ## __Number of Available Kernels__ ## &nbsp; > I have found it hard to understand which "20+ bespoke" kernels were in fact implemented. &nbsp; For the graph kernels alone, the total count subsumes the total number of kernels available in the GraKel library (18).
In the "**external_graph_kernels.ipynb**" notebook we showcase the application of our wrapper around GraKel, the SIGP class, which allows any kernel from GraKel to be used as a component of a PyTorch-based Gaussian process. Since GPU-enabled GP libraries do not natively support non-tensorial inputs, the SIGP class required a substantial amount of engineering work, but enables autodiff-based, end-to-end learning over the kernel hyperparameters of the GraKel library. Certain kernels such as the Weisfeiler-Lehman (WL) kernel have concrete feature functions, and so a WL kernel-GP can be implemented by fitting a linear kernel to those features. However, given that many of GraKel's kernels do not possess distinctive feature functions, we decided to implement a more general wrapper framework. Additionally, we are in the process of integrating an additional suite of bit vector kernels in the coming week. &nbsp; ## __Summary__ ## &nbsp; We hope that our additional clarifications and discussion address all of your questions and concerns. Please let us know if you have any further questions! &nbsp; Sincerely, The Authors &nbsp; --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: Thank you for the clear response (both to me and other reviewers). Because sharing the examples is pending AC approval/guidelines, I will refrain from updating my score just yet. As it stands I am however inclined to increase it substantially (subject to the examples provided) > For the graph kernels alone, the total count subsumes the total number of kernels available in the GraKel library (18) I believe this should be very clearly stated rather than the fact that you provide 22+ kernels. The fact that your library immediately extends another one with no real overhead is a good thing. > Examples of Real-World Use-Cases I am eager to see these.
> Plans for Future Development I would highly recommend making the short and long-term plans clearer in the conclusion of the article: if anything to prevent the kind of reaction I personally had thinking it was "an admission of stale development and of an unclear vision for the future of the library". --- Reply to Comment 1.1.1: Title: Many Thanks for the Prompt Response and Additional Feedback! Comment: &nbsp; Thank you for your response! We are happy to hear that you found our clarifications helpful and sincerely appreciate the additional feedback. &nbsp; ## __Additional Kernels__ ## &nbsp; > I believe this should be very clearly stated rather than the fact that you provide 22+ kernels. The fact that your library immediately extends another one with no real overhead is a good thing. &nbsp; Thank you. We agree that this should indeed be stated more clearly in the paper and we will ensure that the graph kernel section (3.3) is adjusted accordingly. Motivated by your comment, we have additionally implemented 12 new bit vector kernels based on the following similarity measures from [1] and [2]: &nbsp; 1. Dice 2. MinMax 3. Sokal-Sneath 4. Russell and Rao 5. Sorgenfrei 6. Forbes 7. Intersection 8. Faith 9. Otsuka 10. Rogers-Tanimoto 11. Braun-Blanquet 12. Rand &nbsp; All of these measures yield provably symmetric positive-definite kernels [3] and we provide parallelisable and batch-GP-compatible implementations with associated unit tests. We have also evaluated each of these kernels on the Photoswitch dataset, finding that they often slightly outperform the more standard Jaccard-Tanimoto kernel - particularly in the case of the Sorgenfrei kernel (number 5). &nbsp; ## __Real-World Use-Cases__ ## &nbsp; We are still waiting to hear back from the AC/SAC regarding the best way to share external references and will follow up with them shortly. &nbsp; > I am eager to see these.
&nbsp; To provide you with as much information as possible whilst we wait, we think that it is safe to share the specific application domains in which GAUCHE has featured as a core component of published work. These include: &nbsp; 1. Additive screening for chemical reaction optimization. 2. Discovery and optimization of novel catalysts. 3. Self-driving laboratories and computational experiment planning. &nbsp; Additionally, GAUCHE has been a core component in enabling novel Bayesian optimization methodologies to be evaluated on molecular datasets. Published work in this direction includes: &nbsp; 1. The evaluation of a novel multiobjective Bayesian optimization scheme on the task of identifying molecules with favourable cell permeability for drug delivery. 2. The evaluation of a novel method featuring Bayesian quadrature on a) the task of identifying molecules with anti-malarial properties and b) the task of identifying molecules with promising solvation capabilities for use in lithium-ion battery electrolytes. &nbsp; We hope that these additional elaborations are helpful and provide useful context to inform your decision-making process. &nbsp; ## __Governance Model__ ## &nbsp; > I would highly recommend making the short and long-term plans clearer in the conclusion of the article &nbsp; Thank you. We agree that the conclusion should more clearly convey our short-term development and long-term governance plans and will adjust it accordingly. Regarding the latter, our aim is to maintain a lean, well-tested, and up-to-date main codebase. Following the maintenance model of GPflow, which has proved successful, we aim to invite community-driven contributions principally as PRs in the form of notebooks (as opposed to extensions to the main codebase) that reflect the needs and considerations that researchers come across in practice. In this fashion, we may support more advanced features without bloating the codebase and increasing maintenance requirements. 
&nbsp; We thank you again for your feedback and are happy to answer any further questions! &nbsp; Sincerely, The Authors &nbsp; ## __References__ ## &nbsp; [1] Choi, Seung-Seok, Sung-Hyuk Cha, and Charles C. Tappert. “A Survey of Binary Similarity and Distance Measures.” Journal of Systemics, Cybernetics and Informatics 8.1 (2010): 43-48. [2] Todeschini, R., D. Ballabio, and V. Consonni. “Distances and Similarity Measures in Chemometrics and Chemoinformatics.” Encyclopedia of Analytical Chemistry. RA Meyers, 2020. 1-40. [3] Nader, Rafic, et al. “On the Positive Semi-Definite Property of Similarity Matrices.” Theoretical Computer Science 755 (2019): 13-28. &nbsp;
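A few of the binary similarity measures listed in this thread can be sketched directly from their standard definitions in the binary-similarity literature surveyed by Choi et al. (reference [1] above); the count-based formulation and function names here are illustrative, not GAUCHE's actual implementation:

```python
def _counts(x, z):
    """Contingency counts for two equal-length bit vectors:
    a = shared on-bits, b/c = on-bits unique to x/z, d = shared off-bits."""
    a = sum(1 for p, q in zip(x, z) if p and q)
    b = sum(1 for p, q in zip(x, z) if p and not q)
    c = sum(1 for p, q in zip(x, z) if not p and q)
    d = len(x) - a - b - c
    return a, b, c, d

def dice(x, z):
    a, b, c, _ = _counts(x, z)
    return 2 * a / (2 * a + b + c)

def sokal_sneath(x, z):
    a, b, c, _ = _counts(x, z)
    return a / (a + 2 * (b + c))

def russell_rao(x, z):
    a, b, c, d = _counts(x, z)
    return a / (a + b + c + d)
```

Each measure trades off the four counts differently (e.g. Russell-Rao's denominator includes shared off-bits while Dice ignores them), which is one reason performance can vary across fingerprints and datasets, as observed above for the Sorgenfrei kernel.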
Summary: This paper presents a library for Gaussian processes on chemistry data. In this library, a number of kernels are implemented over chemical representations such as graphs, strings and bit vectors. Regression and Bayesian optimization experiments are shown using the library. Strengths: - GP has been widely used as a ML tool for Chemistry. - A reliable library can lower the barrier for chemistry experts on using ML tools. Weaknesses: - All of the implemented kernels are known in the literature. - Neurips is probably not the right venue for publishing software libraries. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Neurips is probably not the right venue for publishing software libraries. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation of the proposed method has been explicitly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript. We were happy to see you emphasize both the general usefulness of Gaussian Process models as robust molecular machine learning tools, as well as the practical impact that a well-designed and easy-to-use library can have by making them more accessible to scientific experts in chemistry and materials science. The concerns you raised in your review relate to the suitability of NeurIPS as a venue for software library papers and the originality of the kernels we have implemented. We will address each of these points in turn below. &nbsp; ## __Publishing Software Library Papers at NeurIPS__ ## &nbsp; > Neurips is probably not the right venue for publishing software libraries. &nbsp; Our interpretation of the NeurIPS 2023 Call For Papers, which explicitly calls for “libraries” in the Infrastructure section, as well as its call for manuscripts on “Machine learning for sciences”, leads us to believe that our contribution falls within the remit of the current iteration of NeurIPS, although we acknowledge that the aforementioned calls are a recent addition. We refer to [1-3] as examples of software libraries that were recently published at NeurIPS. &nbsp; ## __Originality of our Kernels__ ## &nbsp; > All of the implemented kernels are known in the literature. &nbsp; Building on the previous point, we would like to emphasize that our main contribution is the provision of a robust and easy-to-use library to make Gaussian Processes more easily accessible to expert chemists and materials scientists. While we strongly believe that the development of novel and more performant kernels is an important area of future research, we would like to point out that this is an orthogonal gap in the literature to the one we aim to address with this manuscript - though one that could strongly benefit from the robust foundation we aim to provide with GAUCHE.
&nbsp; We hope that these additional clarifications address all of your questions and concerns. Please let us know if you have any further questions! &nbsp; Sincerely, The Authors &nbsp; ## __References__ ## &nbsp; [1] Jamasb, Arian, et al. “Graphein-a python library for geometric deep learning and network analysis on biomolecular structures and interaction networks.” Advances in Neural Information Processing Systems 35 (2022): 27153-27167. [2] Pineda, Luis, et al. “Theseus: A library for differentiable nonlinear optimization.” Advances in Neural Information Processing Systems 35 (2022): 3801-3818. [3] Feydy, Jean, et al. “Fast geometric learning with symbolic matrices.” Advances in Neural Information Processing Systems 33 (2020): 14448-14462. &nbsp;
Rebuttal 1: Rebuttal: &nbsp; ## __Overview__ ## &nbsp; We would like to thank all reviewers for the time and effort put into reviewing our manuscript and for the valuable and constructive feedback they have provided. We are delighted that all reviewers recognized the practical significance of our work, highlighting that GAUCHE “fills a gap in the open-source GP stack” (**reviewer oVt5**) and provides a “strong theoretical framework with rigorous empirical evaluation” (**reviewer rK4e**) that “allows non-GP-expert chemists to utilize state-of-the-art black-box optimization tools” (**reviewers JWPZ and jdZg**) and is able to “adapt to proteins, molecules, and chemical reactions” (**reviewer rHxF**). We are also happy to see that reviewers appreciated our scope, noting that we “provide thorough explanations of molecular and chemical reaction” tasks (**reviewer JWPZ**), “thoroughly examine different ways of representing molecular structures” (**reviewer rK4e**) and “properly analyze the related kernel functions” (**reviewer rHxF**), which we evaluate “across several diverse benchmarks involving regression tasks, uncertainty quantification, and Bayesian optimization (BO)” (**reviewer rK4e**). Finally, we were pleased to hear that reviewers appreciated our engineering effort, noting that we provide “a very well designed codebase” (**reviewer rK4e**) that “seems easy to follow and adapt” (**reviewer rHxF**), further emphasizing that “the code cleanliness is rather high and the code is well-tested” (**reviewer oVt5**) and that we “ensure reproducibility by providing clear explanations” and tutorials (**reviewer rK4e**). &nbsp; ## __Summary of Concerns__ ## &nbsp; The main concerns raised by reviewers related to: &nbsp; 1. The suitability of NeurIPS as a venue for publishing software libraries (**reviewer jdZg**), 2. The need for a clearer emphasis of the library’s strengths (**reviewers JWPZ and rHxF**) and 3. 
Examples of real-world case studies that use GAUCHE (**reviewer oVt5**). &nbsp; **(1)** We have addressed the first point by referring to the NeurIPS Call For Papers - which explicitly invites contributions on “libraries” (and “machine learning for sciences”) - as well as a range of software libraries that were recently published at NeurIPS [1-3]. **(2)** In response to the feedback we received regarding the second point, we will further refine our abstract, introduction and discussion of prior art to more clearly highlight the strengths and added value that GAUCHE provides (namely ease-of-use, modularity and robust empirical performance). As requested by reviewers, we have also summarized the advantages our library provides over existing packages in the table below, which we will add to Section 5. &nbsp;

| Library | Gaussian Processes | Bayesian Optimisation | Molecular Representations | Chemistry Tutorials | Graph Kernels | Bit Vector Kernels | String Kernels |
|----------|--------------------|-----------------------|---------------------------|---------------------|---------------|--------------------|----------------|
| GPyTorch | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| GPflow | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| BoTorch | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| DeepChem | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| GraKel | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| FlowMO | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ |
| GAUCHE | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

&nbsp; **(3)** In response to the request for real-world case studies, we are eager to share at least four instances (for which public references are available) in which GAUCHE has been used by other researchers in real-world research and production settings. As the review guidelines state that we should not include any links to external pages, we will liaise with the AC/SAC to determine the best way to provide these references. We have added additional discussions and clarifications of these and all other points raised by the reviewers under the respective reviews.
Additionally, our attached rebuttal document includes a more polished version of the markdown table above as well as new results attaining a new **SOTA on the photoswitch benchmark** following the suggestions of **reviewer rK4e**. We hope that our response addresses all reviewer questions and concerns and are happy to answer any further questions! &nbsp; Sincerely, The Authors &nbsp; ## __References__ ## &nbsp; [1] Jamasb, Arian, et al. “Graphein-a python library for geometric deep learning and network analysis on biomolecular structures and interaction networks.” Advances in Neural Information Processing Systems 35 (2022): 27153-27167. [2] Pineda, Luis, et al. “Theseus: A library for differentiable nonlinear optimization.” Advances in Neural Information Processing Systems 35 (2022): 3801-3818. [3] Feydy, Jean, et al. “Fast geometric learning with symbolic matrices.” Advances in Neural Information Processing Systems 33 (2020): 14448-14462. &nbsp; Pdf: /pdf/e42ef9ff87e0c4873a68f91c68a7f858b4cf2055.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present a framework called GAUCHE with a comprehensive exploration of Gaussian Processes (GP) and their application to molecular machine learning. The authors thoroughly examine different ways of representing molecular structures - through hand-tuned fingerprints, string notations (SMILES/SELFIES/Protein sequences), and undirected graphs. The authors evaluate the proposed kernels and representation schemes across several diverse benchmarks involving regression tasks, uncertainty quantification, and Bayesian optimization (BO). The experiments use diverse datasets, spanning property prediction for single molecules and reaction yield prediction for sets of molecules. Finally, the authors evaluate the best-performing GAUCHE kernels in a Bayesian optimization framework using three distinct datasets. Results reveal that GAUCHE's kernels outperform a random baseline, especially in the low-data regime, highlighting the framework's practical utility in helping chemists prioritize synthesis candidates. Overall, this is an impressive paper with a strong theoretical framework and rigorous empirical evaluation. Strengths: 1. This paper introduces a novel, theoretically robust kernel design for Gaussian process regression tailored for molecular data types. 2. The authors demonstrated the utility of the GAUCHE kernels in Bayesian optimization tasks, which highlights their real-world applicability, particularly in the context of supporting chemists in candidate selection for synthesis. 3. The authors ensure reproducibility by providing clear explanations and thorough empirical evaluations as well as a very well-designed codebase. 4. The paper uses a variety of datasets, ranging over properties applicable to the drug discovery and material discovery domains, which showcases the adaptability of the GAUCHE kernels across different data. 5.
The authors evaluate the GAUCHE kernels across a wide array of molecular input types, ranging from fingerprints to strings and graphs, highlighting the versatility of the proposed framework. Weaknesses: This paper presents a strong methodological framework and robust empirical evaluation, but it does fall short in some key areas. 1. While the authors have shown the GAUCHE framework to be effective in a number of experiments, a comparison against SOTA neural networks, for instance Chemprop and ChemBERTa, is notably lacking. 2. The authors emphasize GAUCHE's applicability in the low-data regime, which is certainly a critical need in many scientific domains. However, there are cases where large, high-quality datasets are available, for instance internal databases at big pharma companies or public repositories like BindingDB. It would be great to see experiments on how the computational requirements, training time, and model performance would be impacted when scaling up the dataset size. 3. It would be great if the authors could explore comparisons between RDKit/FragPrints and proprietary fingerprint techniques such as Dragon FP and Schrödinger, provided the licenses and costs are manageable. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. How does the splitting strategy affect the results? For example, what if the data were split based on molecular scaffolds, which would ensure the structural diversity of the test set, instead of a random split? 2. In the context of the Uncertainty Quantification benchmark, how does the predicted variance of the model relate to OOD samples? Could you provide a plot of the Tanimoto distance to the training dataset versus the predicted variance? 3. The authors touch upon the potential use of embeddings of molecules from pretrained models as inputs for the kernels, but there don't seem to be any results for this approach. Is this something the authors are considering for future investigations? 4.
How would the performance of the kernels be impacted by changes in the hand-tuned fingerprint radius? Have any ablation studies been conducted to investigate this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: 1. The GPR models can be computationally intensive for larger datasets, especially when used in conjunction with high-dimensional fingerprints. This could limit their applicability in scenarios where abundant high-quality data is available. 2. The performance of predictive models in low-data regimes is heavily affected by the choice of dataset splitting method. The results obtained using random splitting might not hold if other splitting methods, such as scaffold or temporal splitting, are used. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
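Since several of the questions above hinge on Tanimoto similarity between fingerprints, here is a minimal sketch of how that quantity is computed on binary fingerprints (pure Python on toy bit sets; the fingerprints below are hypothetical illustrations, not actual RDKit ECFP output):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    represented as sets of "on" bit indices: |A intersect B| / |A union B|.
    For bit vectors x, y this equals <x,y> / (|x|^2 + |y|^2 - <x,y>),
    the quantity the Tanimoto kernel is built on."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

# Toy fingerprints (hypothetical bit positions, not real ECFP bits)
fp1 = {0, 2, 5, 7}
fp2 = {0, 2, 6}
print(tanimoto(fp1, fp2))  # 2 shared bits / 5 distinct bits = 0.4
```

This also makes the reviewer's question about OOD samples concrete: one minus this score is the Tanimoto distance whose relationship to predicted variance is asked about above.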
Rebuttal 1: Rebuttal: &nbsp; Thank you for taking the time to review our manuscript and for providing detailed, helpful and constructive feedback. We were happy to see you emphasizing the quality of our codebase and empirical evaluation, as well as the practical importance of our work. The main suggestions you raised in your review relate to a characterization of the computational requirements as the dataset size increases, as well as a comparison to state-of-the-art deep learning algorithms and proprietary fingerprints. We sincerely appreciate the feedback and will try to address your points below. &nbsp; ## __Computational Requirements for Larger Datasets__ ## &nbsp; > The authors emphasizes GAUCHE’s applicability for a low data regime, which is certainly a critical need in many scientific domains. However, in cases where large, high-quality datasets are available. For instance, internal databases at big pharma companies or public repositories like BindingDB. It would be great to see experiments around how the computational requirements, training time, and model performance would be impacted when scaling up the dataset size. > The GPR models can be computationally intensive, especially for larger datasets if used in conjunction with large dimensional fingerprints etc. This could limit the applicability in scenarios where abundant high-quality data is available. &nbsp; While black-box optimization tasks in low-data regimes are of substantial practical importance, we agree that it would be very interesting to investigate and extend GAUCHE to work well on larger datasets - for example, as you suggest, large public or private databases or even large DFT- or MD-derived datasets. As standard Gaussian Process inference scales cubically in the number of datapoints, it is likely too expensive to be a viable option in such settings. We have added a tutorial featuring the use of sparse GPs on the lipophilicity dataset (ca. 
4200 molecules), which reduce this complexity by only performing inference over a subset of inducing points. &nbsp; ## __Comparison to Proprietary Fingerprints__ ## &nbsp; > It would be great if the authors could explore comparisons between RDKit/FragPrints and proprietary fingerprint techniques such as Dragon FP and Schrödinger, provided the licenses and costs are manageable. While we have not explored the cost and licensing considerations of proprietary molecular fingerprints, we hope that the modularity of GAUCHE helps to facilitate such a comparison, as it would be straight-forward to apply any of our kernels to a dataset of molecules featurized as Dragon or Schrödinger fingerprints. At the reviewer's suggestion to expand the set of featurizations, we have now included Mordred descriptors which achieve state-of-the-art performance on the photoswitch benchmark. We include these results in the attached pdf. &nbsp; ## __Comparison to SOTA Neural Networks__ ## &nbsp; > The authors have shown that GAUCHE framework has been proven to be effective in a number of experiments, the comparison against SOTA neural network is notably lacking. For instance, chemprop, and ChemBERTa. &nbsp; As most SOTA deep learning frameworks (such as Chemprop and ChemBERTa) do not provide out-of-the-box support for BO/UQ it may be difficult to include them directly. However, the ChemBERTa embeddings could be added as an additional featurization. Additionally, we agree that it would be interesting to identify uncertainty-aware and BO-compatible deep learning frameworks for future benchmarking studies. &nbsp; ## __Questions__ ## &nbsp; > How does the splitting strategy affect the results? For example, what if the data were split based on molecular scaffolds, which would ensure the structural diversity of the test set, instead of a random split? > In the context of the Uncertainty Quantification benchmark, how does the predicted variance of the model relate to OOD samples? 
Could you provide a plot of the Tanimoto distance to the training dataset versus the predicted variance? &nbsp; This is an excellent point. While we have chosen random splits for our current experimental setup to mimic late-stage molecular optimization within a chemical series, we agree that characterizing and comparing the predictive accuracy and calibration of different models in an out-of-distribution regime is a useful additional consideration. We will try to set up the corresponding experiments and add them to the appendix of our manuscript. For the Tanimoto kernel, given that it uses the Tanimoto distance metric directly, we would expect there to be a direct correlation, although it is unclear what we would expect for non-binary representations. &nbsp; > The authors touch upon the potential use of embeddings of molecules from pretrained models as inputs for the kernels, but there doesn't seem to be any results for this approach. Is this something the authors are considering for future investigations? &nbsp; There is a tutorial notebook, *pretrained_kernel.py*, in the notebooks folder of the library that considers the case of pretrained embeddings. Seeing as this methodology is of interest, we can run more extensive experiments and include them as an additional point of comparison. &nbsp; > How would the performance of the kernels be impacted with the changes in the hand tuned fingerprint radius? Have any ablation studies been conducted to investigate this? &nbsp; We have not carried out any kernel-specific ablations yet, but agree that this would be another interesting plot to add to our appendix, as it would illustrate how the performance of Gaussian Process models changes when they are trained on progressively more expressive molecular representations. &nbsp; ## __Summary__ ## &nbsp; We hope that this additional discussion addresses all of your points. Please let us know if you have any further questions! 
&nbsp; Sincerely, The Authors &nbsp; --- Rebuttal Comment 1.1: Comment: I appreciate the authors' comprehensive response to the feedback and the steps taken to address the main points raised in the review. The rebuttal addresses my concerns thoroughly, especially in regards to computational requirements for larger datasets, comparison with SOTA methods and the comparison to proprietary fingerprints. However, specific details on why direct comparison with some neural networks might not be applicable could strengthen your argument. Also, the discussion on the computational requirements is comprehensive, and the rebuttal carefully outlined the approach to proprietary fingerprints. Regarding the questions, the authors have provided mostly satisfactory answers, with clear acknowledgments and plans for further investigation. My current scores for this paper remain the same at this stage. --- Reply to Comment 1.1.1: Title: Additional Experiments and Clarification Comment: Thank you for your response! We are happy to hear that our rebuttal thoroughly addressed your questions and concerns. In the following, we would like to clarify our response regarding the direct comparison to deep neural networks and share results from three additional experiments motivated by your suggestions. &nbsp; # __Direct Comparison to Deep Learning Algorithms__ &nbsp; We would like to refer to Appendix A to point out that we do already perform an extensive empirical comparison to a range of state-of-the-art uncertainty-aware deep neural networks [1-4]. We apologise for any confusion and will make sure to feature these results more prominently in the main text of our manuscript, as we agree that they provide an important reference point. In our previous response, we were referring to the fact that the predictions of deep learning frameworks such as ChemProp and ChemBERTa do not provide native estimates of their predictive uncertainty. 
However, we discovered that ChemProp does provide an ensembling functionality that produces empirical uncertainty estimates that we can directly compare against our GPs. We have thus carried out an additional empirical evaluation of ChemProp on the Photoswitch dataset. Specifically, we created 20 random train-test splits, for each of which we performed 100 iterations of `hyperopt`-based hyperparameter search using the functionalities provided in ChemProp. Using the best hyperparameter combination, we then trained an ensemble of five GNNs and evaluated it on each held-out test set, using the mean and variance of the ensemble predictions to compute the root-mean-squared error (RMSE) and negative log-predictive density (NLPD) to quantify the models' predictive accuracy and the calibration of their predictive uncertainty estimates. As is apparent from the table below, we found the ensembling approach of ChemProp to underperform a Tanimoto-kernel GP.

| Method | RMSE (&darr;) | NLPD (&darr;) |
| -------- | -------- | -------- |
| ChemProp | 30.35&pm;1.30 | 4.53&pm;0.83 |
| Tanimoto GP | **20.9&pm;0.7** | **0.22&pm;0.03** |

&nbsp; # __Results for Scaffold Splits__ &nbsp; > How does the splitting strategy affect the results? For example, what if the data were split based on molecular scaffolds, which would ensure the structural diversity of the test set, instead of a random split? To answer this question, we have re-run parts of our experimental evaluation with 80-20 Bemis-Murcko scaffold splits instead of random splits. As only the lipophilicity dataset exhibits sufficient scaffold diversity to perform such an analysis (the skewness of the scaffold distribution in the others makes an 80-20 split impossible), the following results focus on the predictive accuracy (RMSE) and calibration (NLPD) of GP models in this setting. 
While this more challenging evaluation setup leads to slightly higher RMSEs and NLPDs, we note that one can observe the same trends as with random splits: Tanimoto-based GPs generally outperform Scalar Product ones, while string kernel-based GPs are better than both.

|Kernel| Representation | RMSE (&darr;)| NLPD (&darr;)|
|:---:|:---:|:---:|:---:|
|Tanimoto|Fragprints| 0.86 &pm; 0.01 | **1.02 &pm; 0.04** |
||Fingerprints| 0.88 &pm; 0.01 | 1.12 &pm; 0.04 |
||Fragments| 0.89 &pm; 0.01 | 2.10 &pm; 0.13 |
| Scalar Product | Fragprints | 0.89 &pm; 0.01 | 1.75 &pm; 0.08 |
||Fingerprints| 0.95 &pm; 0.01 | 1.99 &pm; 0.09 |
||Fragments| 1.00 &pm; 0.01 | NaN |
|String|SMILES| **0.82 &pm; 0.01** | 1.08 &pm; 0.04 |

&nbsp; # __Ablation over Fingerprint Radius__ &nbsp; > How would the performance of the kernels be impacted with the changes in the hand tuned fingerprint radius? Have any ablation studies been conducted to investigate this? Motivated by your suggestion, we have performed an ablation study over the hand-tuned radius parameter of extended-connectivity fingerprints (ECFPs). Specifically, we trained Tanimoto-kernel GPs on the Photoswitch dataset using a series of five increasing radius parameters. In the table below, we report the mean and standard error of the RMSE and NLPD over 50 different 80-20 train-test splits. Intriguingly, these results show a strong negative correlation between the fingerprint radius and predictive performance. We hypothesize that this is caused by the fact that an expanding feature space leads to lower and less informative Tanimoto similarity scores, making it more difficult to train generalisable models. 
| Fingerprint |RMSE (&darr;)|NLPD (&darr;)|
|:---:|:---:|:---:|
|ECFP4|**22.65&pm;0.55**|**0.41&pm;0.05**|
|ECFP6|23.50&pm;0.55|0.47&pm;0.05|
|ECFP8|24.43&pm;0.54|0.52&pm;0.05|
|ECFP10|25.17&pm;0.54|0.56&pm;0.04|
|ECFP12|25.70&pm;0.53|0.58&pm;0.04|

&nbsp; We thank you again for the very helpful feedback and hope that these additional clarifications and experimental results are helpful in addressing any remaining concerns. Please let us know if you have any further questions. Sincerely, The Authors
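For readers unfamiliar with the NLPD columns in the tables above, the per-point quantity under a Gaussian predictive distribution can be sketched as follows (our illustration of the standard formula; the exact averaging convention used in the paper may differ):

```python
import math

def gaussian_nlpd(y, mu, var):
    """Negative log predictive density of observation y under N(mu, var).
    Lower is better: it penalizes both inaccurate means and miscalibrated
    variances. Reported NLPDs average this over the held-out test set."""
    return 0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

# A calibrated unit-variance prediction at the true value:
print(gaussian_nlpd(0.0, 0.0, 1.0))  # 0.5 * log(2*pi) ≈ 0.919
```

Note that an overconfident prediction (small `var` with a large error) is penalized heavily, which is why the ChemProp ensemble's NLPD can be poor even when its RMSE is moderate.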
Adaptive whitening with fast gain modulation and slow synaptic plasticity
Accept (spotlight)
Summary: This paper proposes a normative principle for the symmetric whitening problem, i.e., a batch optimization problem that provably attains symmetric whitening is used to derive an online adaptive algorithm. The proposed framework unifies previous works, and the resulting algorithm maps onto a single-layer network with interneurons and local learning rules. The mathematical findings of this normative approach are also supported by evidence from neuroscience. The proposed approach is tested on synthetic data and natural images to demonstrate its performance. Potential improvements toward a more biologically realistic method are also discussed. Strengths: The paper provides a neural network solution to the whitening problem which satisfies two important constraints for biological plausibility: i) the network operates in an online manner, and ii) the weight and gain updates are local. The proposed online algorithm is illustrated to obtain whitening with both synthetic and natural datasets. Weaknesses: The writing is lacking in some parts, and some important details are missing from the paper. More numerical experiments should be presented to assess the introduced framework. For exact details on the weaknesses, please see my comments below and the questions section. * Line 83, "... where $f_c(\mathbf{M}_c) = 2 \mathbf{M}_c$" should be changed to "... where $f_c(\mathbf{M}_c) = \text{Tr}(2 \mathbf{M}_c)$" since $f_c$ is a scalar function of matrices. Moreover, even though it is a simple proof, it would be valuable to show how this unique minimum is achieved. * The intermediate step (stated between line 150 and line 153) used to obtain Equation (4) should be written explicitly in the paper (it can be in the appendix). * The decoupling of the feedforward and feedback weights (in Appendix D.2) is very similar to the idea presented in [i], since the assumption $g_i + m_i \in (0, 2 \eta_w^{-1})$ corresponds to weight decay. The paper should cite this reference. [i] J.F. 
Kolen and J.B. Pollack. Backpropagation without weight transport. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), volume 3, pages 1375–1380 vol.3, 1994. doi: 10.1109/ICNN.1994.374486. * The experiment with synthetic data only considers whitening matrices of the form $\mathbf{M}_c = \mathbf{I}_N + \mathbf{V} \mathbf{\Lambda}(c)\mathbf{V}^T$. An experiment with a more general form of whitening matrix is required to assess the performance of the proposed algorithm. * Hyperparameters (i.e., $\eta_w, \eta_g, \eta_r$) are not provided in the paper (except that the authors provide $\eta_w$ for Algorithm 2 in Appendix C). It would be beneficial to include a table that provides the hyperparameters used in each experiment. * The differences and novelties compared to the previous works by Pehlevan and Chklovskii [11] and Duong et al. [18] are discussed quite well, but a comparison in the numerical experiments is missing. * A single Python script is shared for the proposed method, although it does not include the experiment code. The authors mention in the paper that the full code will be shared upon publication. However, the current code does not provide a comprehensive understanding of the conducted experiments. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * The numerical experiments presented in Appendix C use Algorithm 2, but its biological plausibility is questionable due to matrix inversion and batch learning. Did you experiment with this setup using Algorithm 1? * The authors propose Algorithm 3 as a more biologically plausible algorithm compared to Algorithm 1, by complying with Dale's law and decoupling the feedforward and feedback weights for asymmetry (the weight transport problem). However, no numerical experiment has been demonstrated for this proposal. Did you test this algorithm? * The value of $\alpha$ also seems to be a hyperparameter for the proposed framework. 
Did you analyze the effect of this choice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I think that the limitations are adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
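To make the "symmetric whitening" objective discussed in this review concrete: the unique symmetric whitener of a covariance $\mathbf{C}$ is $\mathbf{M} = \mathbf{C}^{-1/2}$, which for a $2\times 2$ covariance admits a closed form. A minimal self-contained sanity check (our illustration of the definition, not the paper's algorithm):

```python
import math

def symmetric_whitener_2x2(C):
    """Closed-form C^{-1/2} for a 2x2 symmetric positive-definite C.
    Uses sqrt(C) = (C + s*I) / t with s = sqrt(det C), t = sqrt(tr C + 2s),
    then inverts. M is the unique symmetric matrix with M C M = I."""
    (a, b), (_, d) = C
    s = math.sqrt(a * d - b * b)
    t = math.sqrt(a + d + 2 * s)
    S = [[(a + s) / t, b / t], [b / t, (d + s) / t]]  # sqrt(C)
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

C = [[2.0, 1.0], [1.0, 2.0]]
M = symmetric_whitener_2x2(C)
print(matmul(matmul(M, C), M))  # approximately the identity matrix
```

The point of the batch objective criticized above is precisely to recover this $\mathbf{M}$ (for general $N$) without explicit matrix square roots or inverses.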
Rebuttal 1: Rebuttal: Thank you for your careful reading of our work and for your suggestions. We regret that you found the writing lacking in some parts. We have revised our paper in accordance with your suggestions, which we believe has improved the overall clarity of the paper. We are concerned that a central contribution of our work may not have been appreciated. Our primary motivation is to develop a biologically realistic circuit for adaptive, _context-dependent_ whitening with complementary computational roles for synaptic plasticity and gain modulation. We formalize existing mechanisms of gain modulation and synaptic plasticity into a joint computation operating across multiple timescales, yielding a new mechanistic theory and solution to context-dependent adaptation in neural circuits. We believe this is a novel and significant contribution, which we will emphasize in our revision. Weaknesses: 1. We corrected the equation and added a short proof that explains how the minimum is achieved. 2. We added a short section to the appendix with the intermediate steps written explicitly. 3. Thank you for pointing us to reference [i]. It is indeed relevant and we've added a citation. 4. A strength of our normative approach is that its performance can be predicted by analyzing the starting objective. In particular, given ${\bf W}$, we can analytically describe the set of covariance matrices that can be whitened by the circuit (see the display after eq 3). Our numerical simulations verify this analytical prediction. Given the analytical nature of our results, we believe that the current set of experiments adequately validates the performance of the circuit. 5. We amended the text to include a table of these hyperparameters. 6. The Pehlevan and Chklovskii [11] and Duong et al. [18] models, which use either synaptic plasticity or gain modulation, correspond to our network in the regimes $\eta_g=0$ and $\eta_w=0$, respectively. 
A comparison to these regimes is shown in Fig. 4C, where we quantify the test error for our model (red histogram), for fixed gains [11] (green histogram) and for fixed synapses [18] (purple histogram). We amended the text to emphasize this comparison to previous work. 7. We have submitted full code (to the AC via an anonymous link, as instructed) for the synthetic data experiments. Questions: 1. Yes, using Algorithm 1 for e.g. Appendix Fig C.1 still learns an approximately orthogonal sinusoidal basis for the data. The main difference is that the online algorithm requires far more iterations than the offline algorithm. 2. Yes, see Fig E.2 in the rebuttal PDF. 3. Yes, we've analyzed the $\alpha$ parameter, which corresponds to the leak term in the neural dynamics. In the context of the synthetic experiment, when the generative basis ${\bf V}$ is not orthogonal, a non-zero $\alpha$ parameter is necessary (effectively due to the matrix inversion); when ${\bf V}$ is orthogonal, $\alpha$ is less important and can be set to zero. See Fig E.3 in the rebuttal PDF. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal by Authors Comment: I would like to thank the authors for their thoughtful responses. I genuinely appreciate the central contributions made by this work. My intention in offering my comments was to provide constructive insights on aspects that, in my opinion, could be beneficial for further refinement. In this context, I find that this rebuttal and the global rebuttal address the concerns and queries I had raised. I think that the promised revisions and provided clarifications have already enhanced or will further enhance the current manuscript. I would also like to thank the authors for the supplementary experiments presented in the global PDF, which are valuable additions to the work. As a result of these considerations, I am inclined to adjust my rating from 5 to 6.
Summary: This paper gives an algorithm for learning weights of a neural network over long time scales, which allow interneurons to decorrelate the responses of excitatory neurons by modulating their gains over short time scales. On both synthetic and natural image datasets, this algorithm is shown to be effective and to generalize to new contexts. Strengths: * This paper makes progress on a fundamental question in neuroscience: How a population of neurons can coordinate their responses to stimuli to encode it. This work should also be of interest in machine learning, as it has obvious applications to transfer learning. * Overall, the paper is clear and easy to understand. * The learning algorithm is online and local. * The technical evaluations are fairly thorough and support the paper's claims. Weaknesses: Some weaknesses are inherited from the Duong et al. model which this paper is building on, specifically related to its biological plausibility: * All neurons are linear. * Dale's law is not observed. * Forward and backward weights are mirrored. * In general, more interneurons than excitatory neurons are required (in the worst case, as many as $\Omega(n^2)$ interneurons for $n$ excitatory neurons), which does not match the ratios observed in the brain. Experimentally, the algorithm fails gracefully when fewer interneurons than required are available. A more realistic version of the model which addresses the first two issues is provided in the appendix. However, given that the scope of this work is primarily modeling a phenomenon in neuroscience, I think the paper would be stronger if this biologically plausible version were featured and evaluated more prominently. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Line 83: Missing a trace on the RHS of the equation? * I would appreciate a more in-depth evaluation, perhaps on a synthetic dataset, of what happens when there are fewer interneurons than required. 
* Did you consider adding an activation function, at least to the interneurons? I could imagine that there is a similar learning algorithm which allows for fewer nonlinear interneurons. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
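As a toy illustration of the fast gain-modulation mechanism this review discusses (our single-neuron caricature, not the paper's interneuron circuit): a multiplicative gain driven by the deviation of the output variance from 1 converges to the inverse standard deviation of its input, i.e. it whitens that one dimension on the fly:

```python
import math
import random

def adapt_gain(samples, eta=0.01):
    """Single-neuron gain adaptation: output y = g * x, with the gain
    nudged multiplicatively toward unit output variance (E[y^2] -> 1).
    At the fixed point, g is approximately 1 / std(x)."""
    g = 1.0
    for x in samples:
        y = g * x
        g *= math.exp(-eta * (y * y - 1.0))
    return g

random.seed(0)
xs = [random.gauss(0.0, 2.0) for _ in range(20000)]
print(adapt_gain(xs))  # hovers near 1/2 = 1/std(x)
```

In the full model, the interneurons play the role of rotating into a basis where such per-unit gain updates suffice, which is why the number of interneurons limits how much of the covariance can be equalized.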
Rebuttal 1: Rebuttal: Thank you for your careful review and for your useful comments. We appreciate your comment that our model has broader implications for ML and applications to transfer learning. Weaknesses: We readily acknowledge there are aspects of our model that are not biologically realistic. This is in part due to the fact that the model is derived from an objective that can whiten any input distribution, which is perhaps unrealistic for a biological system. While we did not have space to fully address this in the main paper, in Algorithm 3 of Appendix D we considered modifications of our circuit to improve its biological realism. 1. Adding a nonlinearity is certainly worth pursuing in a future study. We will update the text to include a reference to related work by Chapochnikov et al. [6], which contains analyses on rectifying interneuron responses. 2. In Appendix D.2 we decouple the feedforward and feedback weight updates, and in Appendix D.3 we consider a modified synaptic update rule that enforces Dale's law. In Figure E.2 of the attached PDF, we show that the modified algorithm indeed works on synthetic datasets. 3. See #2 above. 4. First, interneurons outnumbering excitatory neurons is in fact a relevant phenomenon observed in neuroscience, and exists in structures such as the olfactory bulb, where interneurons outnumber primary neurons by up to 100:1 [1]. Second, in Figure E.1 of the attached PDF, we show that even when $K\ll N$, the circuit responses will be much more decorrelated than the circuit inputs. This approximate decorrelation using fewer interneurons can perhaps explain why responses in V1 are only approximately decorrelated [3]. Questions: 1. Thanks for pointing this out. We have corrected this equation. 2. In Fig 4E (natural images example) and Appendix C, we carefully analyze the effect of having fewer interneurons than required. 
In Fig 4D, we show that the "whitening error" gradually decreases with the number of interneurons $K$ until $K=N$, at which point there is a discontinuous decrease that is due to the fact that we are measuring the difference using the operator norm. We see that even for $K\approx N/3$, which corresponds to the ratio of interneurons to primary neurons in the cortex, the whitening error is much smaller than that for $K=1$. We have now also included the case with no interneurons ($K=0$) and therefore no whitening - see Figure E.1 of the attached PDF. 3. This is a great suggestion for an extension to the model! Adding a nonlinear activation to the interneuron activations may indeed allow for a smaller number of interneurons $K$. Recent related work by Chapochnikov et al. [6] analyzes the role of rectifying interneurons in the whitening olfactory circuit in the context of similarity-matching-based adaptation. Here, we were focused on introducing the novel concept of multi-timescale whitening, and wish to keep the computation as analytically tractable as possible with this added layer of complexity. Any definitive claims on the whitening capability of our network with nonlinearities warrant a separate, follow-up study with extensive numerical/empirical investigation. [1] Shepherd, G. M. (Ed.). (2003). *The Synaptic Organization of the Brain*. Oxford University Press. [3] Benucci, A., Saleem, A. B., & Carandini, M. (2013). Adaptation maintains population homeostasis in primary visual cortex. *Nature Neuroscience*, 16(6), 724-729. [6] Chapochnikov, N. M., Pehlevan, C., & Chklovskii, D. B. (2023). Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction. *Proceedings of the National Academy of Sciences*, 120(29), e2117484120. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! I maintain that this is a good paper and stand by my original score.
Summary: The authors produce a mechanistic model that combines synaptic plasticity and gain modulation to adaptively whiten responses. This model is constructed from an objective for learning a whitening transformation and then considering matrix factorizations. A simple factorization introduces interneurons, and optimization via gradient descent can be mapped to synaptic plasticity, which adapts the weights to a particular context. Another decomposition that enforces a particular diagonalization leads to whitening via gain modulation when the weights of the interneurons are fixed. The contribution of this paper is to combine these two mechanisms into a single cost function and network instantiation in which the gain optimization produces a whitening for a given context, and the synaptic plasticity adapts to properties useful over multiple contexts. They describe an implementation of this optimization in a recurrent neural network and consider different timescales, in which gain adaptation happens quickly within a context and synaptic plasticity happens over a longer timescale. The authors test this framework on simulated data that matches the decomposition assumptions and show that the method is successful. They further test it on natural images and show that the learned connectivity successfully adapts to new contexts. Strengths: This is a very nice paper that connects the computational problem of whitening with multiple neural mechanisms in a single normative framework. While it still has issues with full biological plausibility (as noted by the authors), this is an interesting step in thinking about this computation and the roles of these neural mechanisms. The presentation is very clear (Figure 2 is particularly nice). Weaknesses: There are no significant weaknesses in this paper. To the authors' credit, every question I had that I thought was in the category of "necessary for publication" they addressed. I'll leave everything else to "Questions", below. 
The biggest weakness I see regards the generality of the matrix decomposition. The extent to which the authors test or address this appears on par with other approaches in the field, though. I have some questions below on this topic. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can you make any statements about how general the matrix decomposition in Section 3.3 is? It seems very specific. You demonstrate numerically that this decomposition works for your experiment. When do you expect it to fail? Can you characterize or quantify the differences in context necessary before this decomposition no longer works? Related to this, it looks like you’re using images from the van Hateren database. Can you try the held out image experiments with images from very different contexts and origins, since those images all have clear similarities in large scale properties? This might help find the limits of this algorithm/decomposition. I was going to ask about breaking the symmetry on the weights, which you answered in an appendix. They will asymptotically align, but can you say something about this timescale? Is there a restriction on when the alignment has to happen with respect to the gain modulation and synaptic plasticity that is meaningful or prohibitive? You provide the error as a function of number of interneurons for your natural image experiment. Can you compute some normalization so that there is an idea of what errors are meaningful (maybe the Frobenius norm of C)? Your interneurons are not necessarily the “interneurons” people talk about in real circuits, but it makes one imagine nonetheless that you will have far fewer interneurons than primary neurons. Is this a functional problem given the amount of error you see? Maybe it’s “good enough” even with few interneurons? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not discuss societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
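As context for the whitening computation discussed throughout this review, here is a minimal numpy sketch of the underlying objective: a symmetric (ZCA) whitening matrix $M = C^{-1/2}$ maps responses with covariance $C$ to responses with identity covariance. This is only an illustrative check, not the authors' circuit model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# A random context covariance C (symmetric positive definite).
A = rng.standard_normal((N, N))
C = A @ A.T + N * np.eye(N)

# Symmetric (ZCA) whitening matrix M = C^{-1/2} via eigendecomposition.
evals, evecs = np.linalg.eigh(C)
M = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitened responses r = M s then have identity covariance: M C M^T = I.
assert np.allclose(M @ C @ M.T, np.eye(N), atol=1e-8)
```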
Rebuttal 1: Rebuttal: We appreciate your positive review and thoughtful questions about the work! Questions: 1. We can be precise in describing the set of covariance matrices that can be whitened with this decomposition. In particular, when the inverse whitening matrices lie in a $K$-dimensional linear subspace, then this representation can match the whitening matrices with $K$ interneurons. Since the covariance matrices of natural images approximately share an eigen-basis, this representation can approximately match the corresponding whitening matrices with $K=N$. When the inverse whitening matrices do not lie in a low-dimensional linear subspace, we expect that the representation will require $K$ to be much larger, which can perhaps explain why some circuits have many more interneurons than primary neurons (e.g., olfactory bulb). 2. This is an interesting question which can be answered by appealing to well-known results from the image/video coding literature: a sinusoidal basis (e.g., the DCT) approximates the principal components (Karhunen-Loeve transform) for natural images [4,5]. Our algorithm learns an approximately orthogonal sinusoidal basis as interneuron weights ${\bf W}$ without supervision (Fig 4D, 4F, C.1B, C.2); therefore, this trained model is well-suited to any held-out test images (e.g., outside the van Hateren database) whose statistics do not deviate too far from those of natural images. 3. Yes; as we show in Appendix D.2, the convergence to symmetric weights is exponential. If the weights are order 1, then the exponential convergence rate is on the order of $-\log(1-\eta_w)\approx\eta_w$ when $\eta_w$ is small. That is, the convergence rate is determined by the learning rate for the synaptic weight updates. 4. Thanks for pointing out this issue of interpretability of the error plots. To provide a better sense of error improvement, we've added a horizontal reference line to show what the error would be with no interneurons (i.e. 
without whitening). 5. For olfactory bulb, it seems that this proportion of interneurons is in fact a relevant scenario: interneurons can outnumber excitatory neurons by up to 100:1 [1]. We do show in Fig 4E and Appendix C that our network transitions gradually to the regime where $K\ll N$. In this regime when $K<N$, the error is indeed significantly improved (relative to no whitening --- horizontal line in PDF Fig E.1). [1] Shepherd, G. M. (Ed.). (2003). *The Synaptic Organization of the Brain*. Oxford University Press. [4] Ahmed, N., Natarajan, T., & Rao, K. R. (1974). Discrete cosine transform. *IEEE Transactions on Computers*, 100(1), 90-93. [5] Bull, D., & Zhang, F. (2021). *Intelligent image and video compression: communicating pictures.* Academic Press. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I really enjoyed the paper. I will maintain my score.
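The subspace condition in point 1 of the rebuttal can be illustrated numerically: if all context covariances share an eigenbasis $V$, then every inverse whitening matrix $C_c^{1/2}$ lies in the span of the $N$ rank-one matrices $v_i v_i^\top$, so $K=N$ components suffice. A toy numpy sketch under these assumptions (illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_contexts = 6, 10

# Shared orthonormal eigenbasis V; contexts differ only in their eigenvalues.
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
# Basis of N rank-one matrices v_i v_i^T, flattened into columns (N^2 x N).
basis = np.stack([np.outer(V[:, i], V[:, i]).ravel() for i in range(N)], axis=1)

for _ in range(n_contexts):
    lam = rng.uniform(0.5, 2.0, N)            # context-specific eigenvalues
    Minv = V @ np.diag(np.sqrt(lam)) @ V.T    # inverse whitening matrix C^{1/2}
    coeffs, *_ = np.linalg.lstsq(basis, Minv.ravel(), rcond=None)
    # C^{1/2} is exactly reconstructed from the N shared rank-one components.
    assert np.allclose(basis @ coeffs, Minv.ravel(), atol=1e-8)
```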
Summary: This paper proposes a neural circuit model that combines fast gain modulation and slow synaptic plasticity to adaptively whiten sensory inputs. It appears that this paper is a combination of the studies of ref. 11 and 18. Strengths: ### Originality The strength of this paper is that it combines fast gain modulation and slow synaptic plasticity in a whitening neural circuit, and addresses the shortcomings of earlier models with either gain modulation or synaptic plasticity alone. This is the main conceptual and technical advance of this study. ### Clarity Overall, the paper is well written. But see my comments and questions on some parts I am unclear about. Weaknesses: I don't have major concerns about this study from the math point of view. Nonetheless, I have a significant concern from the neuroscience point of view. The derived updating rule of the gain (Eq. after line 174) depends on the current weight. The neural "gain" is typically adjusted by inhibitory interneurons, and thus I am concerned about how interneurons "sense" the synaptic weights. If there are neurobiological studies supporting this updating rule, please cite the reference in the paper; otherwise, it is better to discuss it at the end. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 91: ref. 11 requires $K \geq N$, i.e., the number of I neurons is no smaller than the number of E neurons. Is this requirement also needed in this paper? Is there a way to make the number of I neurons less than the number of E neurons? This is also the case in the cortex. - It seems that the $f_c(M)$ function was used before, but the writing still feels abrupt to me regarding why an objective function of this form is proposed. - The paragraph starting at line 90: it is better to first explain the decomposition $M_c = W_c W_c^T$ with regard to the weights between E and I neurons. Otherwise, people can easily regard $W_c$ as the feedforward weights. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I realize the framework assumes the synaptic weights between excitatory (E) and inhibitory (I) neurons are symmetric with each other by just differing a sign. Is there a way to break this symmetry? Will the symmetry breaking of E-I weights facilitate optimization? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and for your thoughtful questions. Weaknesses: Thank you for voicing this concern. Although it's not so apparent in the main text, we've resolved this issue in Appendix D.1. Specifically, we scale the column vectors of ${\bf W}$ to have constant norm 1. This effectively replaces the $\text{diag}({\bf W}^\top {\bf W})$ term with a constant vector of ones, so gain updates do not need to "sense" the synaptic weights. This is reminiscent of the synaptic scaling/synaptic redistribution mechanisms discussed in Abbott \& Nelson [2]. Additionally, when the weights are near-optimal, they are approximately constant across contexts and the target term is effectively constant. Questions: 1. Yes, for cases such as the cortex where $K<N$, it's still possible to produce whitened responses provided the set of (inverse) whitening matrices lies within a $K$-dimensional subspace. Further, some experiments in cortex have shown that primary neurons reduce redundancy after adaptation, but are not perfectly whitened [3]. The partially whitened solution found when $K<N$ may provide an explanation for this effect. 2. Thanks for pointing this out; we will amend the text as you suggest to improve clarity. 3. Thanks, we will make this change in conjunction with the previous one. Limitations: We discuss weight decoupling in Appendix D.2. But it is worth mentioning that reciprocal connections between excitatory and inhibitory neurons have been observed in structures such as the olfactory bulb [1], where dendrodendritic synapses give rise to symmetric connectivity matrices (although not necessarily symmetric weight matrices). An exploration of circuit function with asymmetric weights (e.g., for non-symmetric whitening transforms) is an interesting direction to pursue. We have some preliminary findings suggesting that non-symmetric weights can lead to non-symmetric whitening transformations. [1] Shepherd, G. M. (Ed.). (2003). 
*The Synaptic Organization of the Brain*. Oxford University Press. [2] Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: taming the beast. *Nature Neuroscience*, 3(11), 1178-1183. [3] Benucci, A., Saleem, A. B., & Carandini, M. (2013). Adaptation maintains population homeostasis in primary visual cortex. *Nature Neuroscience*, 16(6), 724-729. --- Rebuttal Comment 1.1: Comment: Thanks authors' reply which addresses my concerns. Thus I increase my score from 6 to 7.
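The column-normalization fix described in the rebuttal, scaling the columns of $\bf W$ to unit norm so that $\text{diag}({\bf W}^\top {\bf W})$ becomes a constant vector of ones, is easy to check with numpy (illustrative only; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 6, 4

W = rng.standard_normal((N, K))
W = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns

# The gain update's diag(W^T W) term is now a constant vector of ones,
# so the gains no longer need to "sense" the synaptic weights.
assert np.allclose(np.diag(W.T @ W), np.ones(K))
```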
Rebuttal 1: Rebuttal: Thank you for your careful reading of our work and for your helpful comments. We have revised our paper in accordance with your suggestions and provide individual responses below. Here we list general changes and additions to the manuscript. 1. **Adaptation with fewer interneurons than primary neurons:** As the reviewers correctly pointed out, many circuits in the brain have far fewer interneurons than primary neurons (e.g., in the cortex the ratio is approximately 1:3). In Figure E.1 of the attached PDF (which is an updated version of Figure 4E from our original submission to include the case $K=0$), we show the performance of our algorithm on the images dataset when $K<N$. We see that even when there are relatively few interneurons (e.g., $K=4$), the whitening error is much less than when $K=0$. Perhaps neural circuits balance the benefits of reducing response correlations with the increased metabolic costs of having more interneurons. Finally, we mention that there are neural circuits in the brain where interneurons greatly outnumber primary neurons; e.g., in the olfactory bulb where the ratio of granule cells to mitral cells is upwards of 100:1 [1]. 2. **Biological realism:** There are aspects of our model that are not biologically realistic. In our original submission, we briefly listed these aspects in the Discussion and in more depth in Appendix D, where we presented a more biologically realistic online algorithm (Algorithm 3). We have now tested Algorithm 3 on a synthetic dataset and found that it successfully learned optimal filters and was able to adaptively whiten the data using gain modulation, see Figure E.2 of the attached PDF. 3. **Effect of hyperparameter $\alpha$:** We tested the effect of varying the $\alpha$ parameter on the synthetic dataset, Figure E.3 of the attached PDF. 
We find that when the column vectors of ${\bf V}$ are orthogonal, varying $\alpha$ does not affect the performance of the algorithm; however, when they are not orthogonal, there is a *slight* degradation of performance when $\alpha$ is not exactly 1. This is due to the fact that if ${\bf V}$ is orthogonal (and full rank), then $\alpha{\bf I}+{\bf V}\Lambda {\bf V}^\top$ can be decomposed as ${\bf V}(\alpha{\bf I}+\Lambda){\bf V}^\top$, so the basis vectors for the (inverse) whitening matrix do not depend on $\alpha$. [1] Shepherd, G. M. (Ed.). (2003). *The Synaptic Organization of the Brain*. Oxford University Press. Pdf: /pdf/ee03f5bcae7a9b8196b20ddd9d23d02ff9b6a286.pdf
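The algebraic fact used in point 3 above, that $\alpha{\bf I}+{\bf V}\Lambda{\bf V}^\top = {\bf V}(\alpha{\bf I}+\Lambda){\bf V}^\top$ when $\bf V$ is orthogonal and full rank, can be verified in a few lines of numpy (illustrative; the sizes and $\alpha$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha = 5, 0.7

V, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthogonal, full rank
Lam = np.diag(rng.uniform(0.5, 2.0, N))

lhs = alpha * np.eye(N) + V @ Lam @ V.T
rhs = V @ (alpha * np.eye(N) + Lam) @ V.T
# Orthogonality (V V^T = I) makes the two expressions identical, so the
# basis of the (inverse) whitening matrix does not depend on alpha.
assert np.allclose(lhs, rhs)
```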
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection
Accept (poster)
Summary: This paper uses the vision-language pre-trained model CLIP for long-tailed object detection. The key idea is to consider not only image-level semantics but also region-level semantics, and fuse them under a soft-label scenario. The overall idea is deployed on different object detectors with different backbones. Experiments are conducted on three long-tailed object detection datasets with extra data. Besides, the experiments and discussion on ablation studies and foundational models are very extensive. Strengths: + To the best of the reviewer’s knowledge, it is the first work (excluding CVPR2023) to use vision-language pipelines for long-tailed object detection. The contribution is significant. + The ablation studies and other discussions are very extensive, in both the main submission and the supplementary material. + This paper is well-written and easy to follow. Weaknesses: - This work, in its current form, lacks theoretical insight, especially on how the proposed method fits into the long-tailed feature distribution. As the proposed method is targeted at the context of long-tailed object detection, not generic object detection, it is very necessary to extensively discuss how the framework and the specific module design can work well or fit into the long-tailed feature distribution. Unfortunately, as the reviewer goes through the entire manuscript: in the Fig. 1 motivation case and the Fig. 2 framework, there is no element for the long-tailed feature space or long-tailed elements. Besides, in the methodology section, from learning the image/object semantics to using soft labels, all these presentations are generic and are suitable for generic recognition, detection, segmentation, etc. More importantly, in the experimental section, there is no feature space visualization on how CLIP benefits the long-tailed feature distribution. - The state-of-the-art comparison is insufficient, and to some extent, unfair. 
Specifically, it is really strange that in Table 1 the authors only compare against some generic vision-transformer-based detectors such as DETR. Why are the methods using vision-language pre-trained models for generic object detection, such as [11,18,27,34,57], not included for comparison on the long-tailed object detection datasets? It is very likely that these methods [11,18,27,34,57] for generic vision-language object detection can work well on long-tailed detection datasets. - The performance of the proposed method seems to be mainly boosted by the use of Swin-Transformer. This remark is not saying that the state-of-the-art performance with the Swin-Transformer backbone is unimportant, but the reviewer would like to point out that, when using the old ResNet-50 backbone (Row 1 of Table 1), the proposed method, even with a vision-language pre-trained model, does not show a significant performance lead over some SOTA methods from 2021. Thus, the reviewer feels that, in the future, why the proposed framework is not as effective with ResNet-50 is worth further investigation. Other minor issues to improve: - Extensive feature visualization in the context of both long-tail and CLIP features, please. - Please use \mathcal{} to denote the loss functions, and use \mathbf{} to denote tensors and vectors. These notations distinguish them from scalars. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The reviewer really appreciates that the work is the first to extend vision-language pre-trained models to the long-tailed object detection task. However, the weaknesses of this submission are also very obvious. In the rebuttal, please address the weakness part point-by-point: the lack of insight for the long-tailed context, the lack of state-of-the-art comparison, and the to-some-extent performance ineffectiveness. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations are not properly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### q1: Insight for long-tail context. Thanks for pointing this out. Exploring extra data is indeed a direct and effective approach to mitigating data scarcity. Our primary motivation arises from the fact that classification data is easy to collect and offers a more balanced distribution compared to detection data. For instance, consider ImageNet-21k and LVIS: there are 997 overlapping categories in ImageNet-21k, covering a significant portion of the LVIS categories with 10$\times$ the number of images, greatly enhancing instance diversity. Furthermore, the samples in ImageNet-21k exhibit a balanced distribution (please refer to Fig.1 in our **rebuttal pdf**). Therefore, leveraging classification datasets demonstrates great potential in alleviating data scarcity. However, it is still challenging for two reasons: (1) semantic ambiguity and (2) location sensitivity. Previous works mainly focus on box estimation to solve location sensitivity but neglect semantic ambiguity and the importance of the rich semantics inside classification data. Fortunately, V-L pre-trained models (CLIP) have demonstrated powerful zero-shot recognition capabilities, benefiting from web-scale training data. Our pilot experiments (please refer to Tab.1 and Fig.2 in the **rebuttal pdf**, and Sec. B in the supplementary) illustrate that CLIP models exhibit balanced performance across rare, common, and frequent categories, and show robustness to location shift. Given these compelling factors, it becomes natural to utilize CLIP models and delve deeply into leveraging the rich semantics from classification data to address long-tail object detection effectively. 
### q2: Lack of state-of-the-art comparison Current research on vision-language (V-L) pretraining and foundational models for long-tailed object detection can be categorized into two main directions: #### 1) **Leveraging the well-aligned vision-language knowledge of pre-trained models** These works have primarily focused on open-vocabulary detection tasks. For a fair comparison, we thus conduct an experiment on open-vocabulary LVIS, which can be viewed as an extremely long-tailed distribution where tail categories have zero occurrences. * Open-vocabulary LVIS detection results compared with [11,18,27,34,57]

| Method | Backbone | AP | AP_novel | AP_c | AP_f |
| ---- | ---- | ---- | ---- | ---- | ---- |
| CLIP [34] + gt bbox | ViT-B | 17.7 | 18.9 | 18.8 | 16.0 |
| GLIP-zeroshot [27] | Swin-L | 26.9 | 17.1 | 23.3 | 35.4 |
| ViLD [11] | R50 | 27.5 | 17.4 | - | - |
| RegionCLIP [57] | R50 | 27.4 | 17.0 | 26.7 | 32.9 |
| Ours | R50 | 31.5 | **23.0** | 30.9 | 36.0 |

As shown in the table, our method surpasses the mentioned previous sota on their benchmark with nearly 6 AP gains on novel categories, further demonstrating the effectiveness of our method. #### 2) **Scaling up the pretraining data and model size** Researchers employ large vision foundation models and advanced training techniques to achieve excellent performance, regardless of the computational burden and training-recipe consistency. In contrast, we utilize the advanced Focal-H to compare with these works on LVIS detection. 
* Results on LVIS val v1.0

| Method | Backbone | $D_{backbone}$ | Params | AP | AP_r | AP_c | AP_f |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| ViTDet | ViT-H-MAE | IN-1k | 692M | 53.4 | - | - | - |
| EVA | EVA-H | merged-30M | 1.1B | 62.2 | 55.1 | 62.2 | 65.2 |
| InternImage | DCNv3-H | merged-data | 2.2B | 63.2 | - | - | - |
| Ours | Focal-H | IN-22k | 747M | 61.2 | **61.2** | 60.1 | 62.4 |

The table shows our model achieves comparable performance with 0.8B fewer parameters, offering a well-balanced performance for both overall and rare categories. Notably, we did not use other advanced training tricks, e.g., larger image sizes (1.5$\times$ larger), larger training batch sizes (16 vs. 64), or their additional inference post-processing (soft NMS and TTA). ### q3: Performance "ineffectiveness" on ResNet-50 Sorry for the confusion. In fact, our method is indeed **effective** on R50. Two main reasons contribute to the apparent "ineffectiveness": 1. Backbone pretraining. The previous sota (Detic) utilized R50-21k [1] (refer to R50$\star$ in Tab.1 of the main paper), offering strong perception capability for downstream detection. In contrast, we just use R50-1k, which is significantly weaker than R50-21k. 2. The size of the CLIP model providing semantics. We utilize CLIP-RN50 as the default semantics provider. However, the previous sota uses CLIP-ViT-B to generate the weights of the classifier. We observe that stronger semantics lead to better performance (L297-L300 in our main paper), indicating that our method can be further improved. Based on these, we conducted experiments with the stronger backbone and semantics provider. Our best model surpasses the previous sota by a large margin (4.1 AP and 6.1 AP_r), indicating its effectiveness on the R50 backbone. 
| Method | Backbone | CLIP model | AP | AP_r | AP_c | AP_f |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Detic | R50-21k | CLIP-ViT-B | 36.8 | 31.4 | 36.0 | 40.1 |
| Ours | R50-1k | CLIP-RN50 | 37.1 | 29.9 | 35.6 | 42.0 |
| Ours | **R50-21k** | CLIP-RN50 | 40.1 | 36.2 | 38.2 | 44.0 |
| Ours | **R50-21k** | **CLIP-RN50x4** | 40.9 | 37.5 | 39.6 | 43.8 |

Thanks for pointing it out and for the great inspiration; we will update this table in the next version. [1] Imagenet-21k pretraining for the masses. In NeurIPS, 2021. ### q4: Limitation part. A main limitation of our approach is treating detection data and classification data statically, with strict equality between them in the unified objective. An optimal scenario might entail dynamically prioritizing $L_{cls}$ for instances that can be accurately categorized while giving $L_{soft}$ precedence in other situations. ### q5: Suggestions on fonts of notations and feature visualization Thanks for your suggestions; we will refine the fonts to make them distinguishable and visualize the features in the next version. --- Rebuttal Comment 1.1: Title: Response to author rebuttal by Reviewer vo8U Comment: Thanks to the authors for providing such a detailed rebuttal. Indeed, my concerns on Questions 1, 2, 3 and 4 have been well addressed, which I appreciate a lot. However, before the discussion period ends, could the authors also show how the proposed pipeline can improve the feature space of long-tailed tasks against the baseline? I feel this is necessary for the scope of NeurIPS, and it can convince me to raise my score above the accept threshold. Many thanks, and I look forward to the update! --- Reply to Comment 1.1.1: Comment: Thank you for your timely response and valuable suggestions. We have performed a visualization of object features to effectively demonstrate the advantages of our approach within the feature space. 
Although we had planned to upload the visualization to provide deeper insight, unfortunately, this year's guidelines prohibit external links during the discussion period. After consulting with the ACs, they suggested that "authors can promise and describe the visualizations they are planning to add and what they show in words." Following the guideline, we provide a detailed description of our visualization pipeline and the resulting outcomes below, and we apologize for the inconvenience. Firstly, we extract **object features** from the validation set corresponding to their ground-truth bounding boxes. These extracted features are then projected into a 2D space using PCA. For a clear visualization, we randomly select six categories, encompassing two rare, two common, and two frequent categories. To visualize the distribution, we normalize the features and employ Gaussian Kernel Density Estimation (KDE) in $\mathbb{R}^2$, following [1]. This visualization technique offers us a way to compare the distribution of object features across categories. Furthermore, it provides a comparative analysis between the baseline and our proposed RichSem. As a result, we have a visualization similar to Fig.3 in [1]. The visualization indeed shows a clear distinction between the two models. For the baseline, the distribution of object features lacks differentiation, often resulting in overlapping patterns among categories, especially between rare and frequent categories. In contrast, in RichSem, features belonging to each category, even rare categories, are well-clustered. This clear intra-class and inter-class structure indicates that our approach effectively enhances the region classification capability for diverse categories. These observations highlight the effectiveness of our proposed method in long-tail object detection, particularly in improving the performance of rare categories. We greatly appreciate your valuable insights and suggestions. 
We promise to include the visualization in our next version. [1] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." ICML, 2020.
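The visualization pipeline described above (object features → 2-D PCA projection → Gaussian KDE per category) can be sketched as follows; the toy features, cluster locations, and bandwidth are illustrative stand-ins, not the paper's data or code:

```python
import numpy as np

def pca_2d(feats):
    """Project features (n x d) onto their top-2 principal components."""
    X = feats - feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T

def gaussian_kde(points, grid, bw=0.2):
    """Evaluate a Gaussian KDE of 2-D points at the grid locations."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2)).mean(axis=1) / (2 * np.pi * bw ** 2)

rng = np.random.default_rng(3)
# Toy "object features" for two classes: well-separated clusters in 16-D.
a = rng.normal(0.0, 0.1, size=(200, 16)); a[:, 0] += 1.0
b = rng.normal(0.0, 0.1, size=(200, 16)); b[:, 0] -= 1.0

proj = pca_2d(np.vstack([a, b]))
pa, pb = proj[:200], proj[200:]
# Each class's density peaks near its own cluster, not the other's.
assert gaussian_kde(pa, pa.mean(0, keepdims=True)) > gaussian_kde(pa, pb.mean(0, keepdims=True))
```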
Summary: - This work deals with long-tail object detection. Authors identify two problems with using additional data, namely semantic ambiguity and location sensitivity. - Authors identify that semantic ambiguity arises due to supervision with one-hot encoded labels from the image datasets and instead propose to use CLIP scores for supervision. - CLIP's ability to provide sufficient semantic information even with coarse locations is leveraged to tackle location sensitivity. - Authors propose RichSem, a simple yet effective method that adds an additional semantic branch to the detector to learn rich semantics from images. - The semantic branch is only required during training and, through extensive experiments, is shown to achieve state-of-the-art results on the LVIS dataset in overall and rare categories. Strengths: - The paper is well written and the presentation makes it easy to follow. - The experiment section is exhaustive and supports all the claims made by the authors. Authors test their method with the Transformer- and R-CNN-based families of detectors, and the ablation experiments clearly explain the contribution of each component. - RichSem is a principled way to leverage additional data for long-tailed object detection. Weaknesses: - Authors miss the comparison with [1], which also uses additional data. Please compare appropriately in Table-1. - The current work heavily relies on CLIP, but it is widely known that CLIP has several limitations [2]. It would be interesting to address the robustness of the current method to the limitations of CLIP. [1] Bo Li, Yongqiang Yao, Jingru Tan, Xin Lu, Fengwei Yu, Ye Luo, and Jianwei Lu, Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning. 
[2] https://stanislavfort.github.io/blog/OpenAI_CLIP_stickers_and_adversarial_examples/ Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Authors mention constructing a mosaic with multiple images in L51, where in the whole method is this done? - What is $f_v^i$ in Eq. 4? Is that $s^t$ obtained in Eq. 2? - From table 2c, rows 4,5 the increase in improvement is because of image level labels. But how are the annotations used in the whole pipeline? The semantic branch only uses CLIP similarities for the KL divergence loss? Is a hard supervised loss also being applied for the image labels? Is there a localization term for the image level labels? - What is $f^t$ in the semantic branch in Fig. 2? If $t\in (\text{loc},\text{cls})$, then is the semantic soft loss applied to both $f^{\text{loc}}$ and $f^{\text{cls}}$? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I do not foresee any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### q1: Comparison with [1] Thanks for the good suggestion. We incorporate our method into Faster R-CNN, employing R50 as the backbone for an appropriate comparison following [1]. The table shows that our method achieves a strong performance and surpasses the previous sota by more than 3 AP on rare categories. Unlike CLIS [1], our approach focuses on mining rich semantics with only pre-defined bounding boxes on extra classification data. Thanks to our semantic learning scheme, our model can effectively leverage the information within classification data for long-tail object detection, achieving a more balanced performance between head and tail categories. We greatly appreciate your suggestion and will update the table in our next version.

| Method | AP | AP_r | AP_c | AP_f |
| ---- | ---- | ---- | ---- | ---- |
| Faster R-CNN | 24.1 | 14.7 | 22.2 | 30.5 |
| EQLv2 | 25.5 | 16.4 | 23.9 | 31.2 |
| BAGS | 26.0 | 17.2 | 24.9 | 31.1 |
| Seesaw | 26.4 | 17.5 | 25.3 | 31.5 |
| EFL | 27.5 | 20.2 | 26.1 | 32.4 |
| MosaicOS | 23.9 | 15.5 | 22.4 | 29.3 |
| CLIS | 29.2 | 24.4 | 28.6 | 31.9 |
| Ours | 30.6 | **27.6** | 29.7 | 32.9 |

[1] Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning. ### q2: Robustness of CLIP Thank you for your suggestion. We do agree CLIP is not robust to adversarial attacks, like many other types of neural networks. In this work, we assume images are pristine, and we will study the robustness of CLIP in the future. ### q3: Questions on mosaic augmentation Thanks. We only apply 2x2 mosaic augmentation on the **extra classification data**. Specifically, we utilize a pre-defined whole-image bounding box for each image and use mosaic augmentation [1] to randomly concatenate sub-images into a mosaic, thus offering coarse locations on classification data. 
Unlike previous works on extra data focusing on bounding box estimation, such as using a pre-trained detector as a region generator [2], online predictions [3], and post-processing methods [4], we emphasize that coarse locations are sufficient and pay more attention to introducing rich semantics from the classification data into the detector. [1] Yolov4: Optimal speed and accuracy of object detection [2] Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning [3] Detecting Twenty-thousand Classes using Image-level Supervision [4] MOSAICOS: A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection ### q4: Eq.4 Sorry for the confusion. In Equation 4, $f^t_{i}$ refers to the corresponding semantic guidance obtained from CLIP, denoted as $s^t$ earlier. In Eq. 4, we compute the Kullback-Leibler divergence between the soft semantic prediction and the corresponding semantics provided by the CLIP models. For the sake of consistency, they should be $o^{soft}$ and $s^{t}$ instead of $f^t_{i}$ and $s^t$ in Eq.4. We will definitely refine the notations in the next version. ### q5: Questions on rows 4-5 of Tab 2c Both line 4 (ImageNet-Unl) and line 5 (Image-LVIS) incorporate the soft KL loss, hard classification loss, and location loss in our training pipeline. For the soft KL loss and location loss, both line 4 and line 5 follow the same approach. They utilize pre-defined whole-image bounding boxes as pseudo-locations and extract semantics based on these coarse locations, forming the soft targets of the semantic branch. Regarding the hard classification loss, line 5 employs image-level labels that are mapped to the LVIS taxonomy as the target. On the other hand, for line 4, the pseudo hard labels for classification are generated by using the class with the highest logits in each semantic target (as described in Eq. 7). 
Additionally, we incorporate a threshold $th$=0.05 to filter out images that are too irrelevant to the taxonomy. Due to the distinctions above, line 5 (Image-LVIS) slightly outperforms line 4 (Image-Unl). This improvement can be attributed to the well-matched taxonomy between the extra data and the target detection data, along with a well-designed label mapper. However, it is noteworthy that the difference between Image-LVIS and Image-Unl is relatively small (only 0.3 and 1.8 on AP and AP_rare). This observation shows our method's potential for leveraging unlabeled classification/object-centric data. ### q6: Questions on f^t $f^t$ in Fig.2 represents the object-level features from CLIP (please refer to Section 3.1 and Equation 2). More specifically, $f^t$ is derived by pooling CLIP features based on the corresponding bounding box. For detection data, this bounding box corresponds to the tight ground-truth bounding box, while for classification data, it is a coarse whole-image bounding box. The object-level CLIP feature $f^t$ is then employed to derive $s^t$ by computing the contrast with the linguistic category features $f^{cat}$, which serves as the soft semantic guidance in our training scheme. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed rebuttal. I'm satisfied with the author's response and would recommend adding these discussions to the final version. After looking at the other reviews and the author's rebuttals, I vote to accept this paper.
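A minimal sketch of the soft KL loss discussed in q4/q5, matching a student's soft prediction to a teacher (CLIP-style) target distribution. The function names, shapes, and eps value are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_soft_loss(student_logits, teacher_probs, eps=1e-12):
    """KL(teacher || student), averaged over objects."""
    p = teacher_probs
    q = softmax(student_logits)
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(-1).mean()

rng = np.random.default_rng(4)
logits = rng.standard_normal((8, 10))          # student predictions, 8 objects
teacher = softmax(rng.standard_normal((8, 10)) * 2)  # CLIP-style soft targets

# Loss vanishes when the student reproduces the teacher distribution,
# and is positive otherwise.
assert kl_soft_loss(np.log(teacher + 1e-12), teacher) < 1e-6
assert kl_soft_loss(logits, teacher) > 0
```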
Summary: To address semantic ambiguity and location sensitivity, this paper introduces a one-stage training framework that leverages additional image data to boost the detector by learning from rich semantics and coarse locations for long-tailed object detection. Their RichSem achieves consistent improvements on both overall and rare categories of LVIS under different backbones and detectors. Strengths: 1. The paper is clearly written, and the main idea is easy to understand. 2. This paper introduces a novel semantics learning framework, which uses an additional branch to learn from rich semantics and coarse locations for long-tailed object detection without the need to compute pseudo labels. 3. Extensive experiments demonstrate significant results on long-tailed datasets. Weaknesses: 1. In long-tailed object detection, the scarcity of samples or natural constraints results in a limited number of instances in the tail classes. Can exploring extra data effectively address the issue of scarce tail classes in practical applications? 2. CLIP is a large-scale model designed for the joint processing of images and text. The lack of representative samples for tail classes may lead to a relatively weak understanding and recognition ability of CLIP for these classes. Can CLIP still provide stable semantic guidance? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the paper weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please check the paper weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### q1: Can exploring extra data effectively address data scarcity?

Yes. Indeed, exploring additional data is a straightforward approach to enhancing the performance of tail categories. However, acquiring bounding box annotations for these rare categories is labor-intensive and costly. Recognizing this challenge, we introduce a new approach that leverages existing image classification data to alleviate the data scarcity issue faced by tail categories. Unlike conventional semi-supervised or weakly supervised methods, our approach does not require estimating bounding boxes, which is challenging. Instead, we focus on rich semantics obtained from CLIP models and leverage coarse locations from classification data for improved results. We conduct comprehensive experiments involving varying backbones, detectors, schedules, and datasets. Our method shows consistent gains in all experiments, demonstrating that it can effectively address the challenge caused by the scarcity of detection data.

### q2: Can CLIP provide stable semantics on rare categories?

Yes, CLIP models showcase balanced recognition capabilities across rare, common, and frequent categories thanks to the web-scale image-text pairs used as training data (see supp Sec. B; we also include the table here for convenience). We use CLIP-RN50 and perform region classification on LVIS utilizing ground truth bounding boxes. Specifically, we obtain object features according to their ground truth bounding boxes and classify them by contrasting with the textual features of the categories. As shown in the table below, the AP of the top 10 predictions per proposal is around 34%, indicating that CLIP can properly rank labels into the top classes. Furthermore, a key observation is that the results highlight a well-maintained and balanced performance across categories with varying frequencies.
| Region classification | AP | AP_r | AP_c | AP_f |
|---- | ---- |---- |---- |---- |
| top1 class per proposal | 16.2 | 16.7 | 16.4 | 15.7 |
| top5 class per proposal | 29.6 | 29.9 | 29.3 | 29.8 |
| top10 class per proposal | 33.9 | 33.1 | 33.7 | 34.6 |

We also find that CLIP models are robust towards location shifts. As shown in Fig.2 in the **rebuttal pdf**, we gradually introduce noise to ground truth boxes, and the top-10 performance experiences only a marginal drop when the noise scale remains relatively small (ranging from 0 to 0.5). These observations underscore CLIP's capacity to provide stable and precise semantic information. It allows our method to effectively address data scarcity by leveraging extra classification data.

---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clarification, which solves most of my concerns. But in my humble opinion, the performance of this method heavily relies on more extra data and the CLIP model, which limits the potential for widespread impact of the work. Can the authors further discuss this issue?

---
Reply to Comment 1.1.1:
Title: Response to Reviewer ndNj
Comment: Thanks for your response and suggestions. Given the natural statistics of the long-tail distribution, current detectors easily bias towards the head categories and show poor performance on the tail categories, limiting their broader applications. Our proposed method effectively addresses this challenge by capitalizing on extra classification data and leveraging the knowledge encoded in pre-trained vision-language models (VLMs), making detectors more practical and broadly usable. It's important to highlight that leveraging extra data [1,2,3], such as weakly annotated or unlabeled data, and the knowledge from pre-trained vision-language models (VLMs) [3,4,5] shows great potential in scenarios with limited training resources for tail categories.
Compared to previous works on improving the training recipe, like loss re-weighting [6], data re-sampling [7], augmentation [8], and decoupled training [9], leveraging extra data can directly address data scarcity by increasing the amount and diversity of training instances. For long-tail object detection, we can utilize CLIP models as semantics providers and classification data, like ImageNet-21k, as extra data, both of which are readily available. Compared to collecting and annotating detection data directly, leveraging classification data is more efficient, requiring no additional data collection and annotation. Besides, the great generalization capability of VLMs allows us to extract rich semantics without any fine-tuning. Overall, these allow for the straightforward application of our method to the detector, resulting in effectiveness and efficiency. Additionally, our proposed semantic branch helps the detector learn the soft semantics within extra data and enhance the feature representation during training, and it can be removed during inference, improving the method's flexibility for use with various detectors. Furthermore, our method can be extended to different types of extra data, including well-label-mapped classification data (INet-LVIS), unlabeled classification data (INet-Unl), and web-collected image-text pairs (CC3M-Unl). It is also adaptable to various sizes of CLIP models (CLIP-RN50, CLIP-RN50x4, CLIP-RN50x16), as illustrated in the tables below. These show the potential for our method to be further applied to large-scale data and better vision-language pre-trained models.
| Extra data | AP | AP_r |
|---- | ---- |---- |
| None | 32.2 | 24.1 |
| CC3M-Unl | 34.0 | 24.8 (+4.6) |
| INet-Unl | 34.7 | 28.6 (+4.5) |
| INet-LVIS | 35.0 | 30.4 (+6.3) |

| Semantics provider | AP | AP_r |
|---- | ---- |---- |
| None | 32.2 | 24.1 |
| CLIP-RN50 | 35.0 | 30.4 (+6.3) |
| CLIP-RN50x4 | 36.0 | 33.0 (+8.9) |
| CLIP-RN50x16 | 36.2 | 31.9 (+7.8) |

Thanks for the constructive suggestions, and we will add the discussion in our next version.

[1] Zhang, Cheng, et al. "MosaicOS: a simple and effective use of object-centric images for long-tailed object detection." CVPR, 2021.
[2] Li, Bo, et al. "Improving Long-tailed Object Detection with Image-Level Supervision by Multi-Task Collaborative Learning." arXiv preprint, 2022.
[3] Zhou, Xingyi, et al. "Detecting twenty-thousand classes using image-level supervision." ECCV, 2022.
[4] Zhong, Yiwu, et al. "RegionCLIP: Region-based language-image pretraining." CVPR, 2022.
[5] Gu, Xiuye, et al. "Open-vocabulary Object Detection via Vision and Language Knowledge Distillation." ICLR, 2021.
[6] Tan, Jingru, et al. "Equalization loss for long-tailed object recognition." CVPR, 2020.
[7] Gupta, Agrim, Piotr Dollar, and Ross Girshick. "LVIS: A dataset for large vocabulary instance segmentation." CVPR, 2019.
[8] Ghiasi, Golnaz, et al. "Simple copy-paste is a strong data augmentation method for instance segmentation." CVPR, 2021.
[9] Kang, Bingyi, et al. "Decoupling Representation and Classifier for Long-Tailed Recognition." ICLR, 2019.
Summary: This paper adopts the CLIP model to obtain 'soft label' supervision to train the detector on a long-tail distributed dataset and derives rich semantics from the CLIP part to enhance the tail-category representations, which can be removed during inference. The authors claim that the CLIP model can capture visual semantics well conditioned on only coarse locations, i.e., the whole-image bounding box for extra data in this paper. They then elaborate soft-label supervision to train the detector so as to alleviate the semantic ambiguity and location sensitivity issues. Various ablative studies have been conducted to validate the effectiveness of the proposed method. Strengths: 1. The idea of this paper sounds technically reasonable. 2. The writing of this paper is easy to follow. 3. The final performance looks competitive and the improvements are obvious. Weaknesses: 1. In Line 172, the subscripts of f and o should perhaps be superscripts. 2. The proposed semantic branch is trained in parallel with the detection heads during training to avoid conflicts between the CE loss and KL loss, but the detection heads are trained only on the long-tailed samples in LVIS without any interaction with the semantic branch; how can it be ensured that the detection heads handle rare-category samples well during inference? 3. ImageNet-21k is used as extra data in this paper, but the backbone is pre-trained on ImageNet too. So is there any dataset overlap between the pretraining and fine-tuning stages? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please regard the weakness parts. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please regard the weakness parts.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### q1: L172 subscripts

Thanks for pointing it out. We will definitely fix it in the next version.

### q2: How can semantic learning on extra classification data boost rare categories detection?

The detection head is trained not only on the LVIS dataset but also on the extra image classification dataset. More specifically, we use a unified objective function $L=L_{loc}+L_{cls}+L_{soft}$ (as detailed in Section 3.3), treating both detection data and extra classification data in a unified way. Consequently, the detection head is trained using both types of datasets. For classification data without bounding box annotations, we employ pre-defined whole-image boxes as pseudo boxes and group multiple images together to create mosaics, which provide location supervision for the detection head. In conclusion, our method can help rare classes in two ways: 1) Rich semantics ($L_{soft}$): the semantic supervision flows back to the object features, implicitly enhancing the feature representation and localization capability; 2) Coarse locations ($L_{loc}$): our training scheme allows the detector to learn from pre-defined pseudo locations on the classification data through $L_{loc}$. To further demonstrate the effectiveness of the two perspectives, we conduct an ablation study on $L_{soft}$ and $L_{loc}$ within the training recipe for classification data. The results, shown in the table below, demonstrate that both rich semantics and coarse locations play significant roles in boosting long-tail object detection, offering notable performance gains on overall AP and rare AP.
| Method | AP | AP_r | AP_c | AP_f |
|---- | ---- |---- |---- |---- |
| w/o $D_{extra}$ | 32.2 | 24.1 | 29.9 | 38.3 |
| + $L_{soft}$ (rich semantics only) | 33.6 | 28.6 (+4.5) | 32.4 | 37.2 |
| + $L_{loc}$ (coarse location) | 35.0 | 30.4 (+1.8) | 33.1 | 39.0 |

### q3: Data overlap between backbone pretraining and detection training

Yes, there is data overlap between the backbone pretraining and detection training. There are 246 categories that overlap between ImageNet-1k and LVIS, and 997 categories that overlap between ImageNet-21k and LVIS.

| Dataset | Number of Imgs | Definition |
|---- | ---- | ---- |
| LVIS | 0.1M | The original LVIS |
| INet-1k | 1M | The original ImageNet-1k |
| INet-21k | 14M | The original ImageNet-21k |
| INet-LVIS | 1M | INet-21k classes overlapped with LVIS |

| Method | $D^{backbone}$ | $D^{od}$ | AP | AP_r | AP_c | AP_f |
|---- | ---- |---- |---- |---- |---- |---- |
| w/o $D_{extra}$ | INet-1k | LVIS | 32.2 | 24.1 | 29.9 | 38.3 |
| w/o $D_{extra}$ | INet-21k | LVIS | 35.7 | 25.9 | 35.0 | 40.7 |
| Ours | INet-1k | LVIS + INet-LVIS | 35.0 | 30.4 | 33.1 | 39.0 |
| Ours | INet-21k | LVIS + INet-LVIS | 37.5 | 32.4 | 36.0 | 41.5 |

We further conduct experiments with R50 backbones pretrained with different amounts of data under the 1$\times$ schedule. The table shows that pretraining on large-scale data can provide strong perception capability for the downstream detection task, with overall performance gain. However, the performance on rare categories is still relatively low, indicating that this approach does not alleviate the long-tail effects in detection. In addition, the pretraining cost will increase significantly with 10 times more pretraining data. In contrast, our method is more effective than pretraining for handling long-tailed detection, especially for the tail categories. Notably, our approach is still effective with strong pretrained backbones, further improving performance on long-tailed object detection.
Thanks for the inspiration, and we will add the discussion on data overlap in the next version.

---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: Thanks for your detailed explanation and the additional experiments. In sum, most of my concerns have been addressed. I keep my initial rating of 'borderline accept'. Moreover, I suggest the authors carefully incorporate the further demonstrations from the rebuttal phase into their revised version, especially 'q2: How can semantic learning on extra classification data boost rare categories detection?'
Rebuttal 1: Rebuttal: First of all, we sincerely appreciate all your valuable comments and suggestions. We are pleased that all reviewers think our paper is well-written and easy to follow. We are encouraged that reviewers find our proposed RichSem with reasonable novelty (YRik), significant results (ndNj), and extensive ablation and discussion (gFff, vo8U). We carefully read the comments and attempted to provide comprehensive responses accordingly. Please find the rebuttal below each official review. We hope the responses could answer the questions raised by reviewers and address any concerns about our work. Thanks again to all reviewers for the time and effort! Pdf: /pdf/319801a82f207d2297e321062c1783f44d786876.pdf
NeurIPS_2023_submissions_huggingface
2023
Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework
Accept (poster)
Summary: In this paper, the authors consider Bayesian bandit algorithms where the exact posterior is not available and only approximations are available. The authors prove that if the $\alpha_1$-divergence and $\alpha_2$-divergence between the exact posterior and the approximation are small, then a modification of Bayesian UCB (EBUCB) achieves $O(\log T)$ regret. From previous work, it is known that Thompson sampling with approximate inference has $\Omega(T)$ regret. The authors also show negative results for EBUCB and Thompson sampling under the condition on only one $\alpha$-divergence. Finally, in synthetic environments the authors confirm their theoretical findings. Strengths: 1. This paper solves an important problem (Bayesian bandit algorithms using an approximation of the posterior) and provides a nice result (the upper bound has $O(\log T)$ regret and the dominant term does not depend on $\varepsilon$, which is surprising). 2. Although the environments are simple and synthetic, they conducted experiments. Weaknesses: 1. In Corollary 3.8, the dependence on $\varepsilon$ is hidden (not discussed). 2. The optimal parameters of the algorithm are not known in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Corollary 3.8, does the small $o$ notation hide $\varepsilon$? Could you briefly discuss its dependence? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable comments. We address the reviewer's concerns below. Q1. "Dependency on $\epsilon$". The regret bound indeed contains $\epsilon$. The exact finite-time upper bound is provided in Step 4 in the proof of Theorem 3.7 (we only show the most dominant term of the regret bound in the main paper to improve readability). It includes the explicit dependence on $\epsilon$, through the terms $M_{\epsilon,1}$ and $M_{\epsilon,2}$ there, and these error terms obviously increase as $\epsilon$ increases. However, this dependence on $\epsilon$ does not appear in the dominating term. The main reason is that the exact posterior becomes more "concentrated" on the true mean with little variability as the time t increases, so the impact from the $\epsilon$ error will vanish. We have some related discussions in Remark 3.9. To elaborate, we use a simple example to provide some intuition. Consider two actions with true mean rewards 0.3 and 0.4. Then after time t, their exact posteriors will be approximately Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)), which concentrate more and more around the mean rewards 0.3 and 0.4 as t increases. The alpha divergence between Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)) goes to infinity when both N_1(t) and N_2(t) go to infinity (which is true by the information lower bound). Therefore, the $\alpha$-divergence between the exact posteriors of the two actions will keep increasing as t increases (eventually exceeding $\epsilon$). Hence, intuitively speaking, the $\epsilon$ error can only "substantially" impact the regret up to some time step $T_0$, and this additional error is indeed captured by our error bound: you can see the terms $M_{\epsilon,1}$ and $M_{\epsilon,2}$ that depend on $\epsilon$ in Step 4 in the proof of Theorem 3.7.
However, they are not the dominating term since after time $T_0$, the $\epsilon$ error cannot impact the regret too much, as the alpha divergence between Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)) has become sufficiently large. Q2. "Optimal Parameters". The exact values of $\alpha_1$ and $\alpha_2$ depend on the user's choice of the Bayesian inference algorithm. Our theoretical results are built upon the general Assumption 3.1, where the Bayesian inference algorithms are not specified. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the clarifications. I would like to keep the current scores. --- Reply to Comment 1.1.1: Comment: Thank you very much. We appreciate the reviewer's reading and reply.
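The concentration argument above can be checked numerically. Under one common convention of the $\alpha$-divergence, $D_\alpha(P\|Q) = (\int p^\alpha q^{1-\alpha}\,dx - 1)/(\alpha(\alpha-1))$, the divergence between two Beta distributions has a closed form via log-Beta functions; the helper names below are ours, and this is only an illustration of the growth with the pull counts, not part of the paper's proof.

```python
from math import lgamma, exp

def betaln(a, b):
    # log of the Beta function B(a, b), via log-gamma (stdlib only)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def alpha_div_beta(a1, b1, a2, b2, alpha):
    """D_alpha(Beta(a1, b1) || Beta(a2, b2)) in closed form.
    Valid when the mixed parameters alpha*a1 + (1-alpha)*a2 and
    alpha*b1 + (1-alpha)*b2 are both positive (true here for alpha = 2)."""
    a = alpha * a1 + (1 - alpha) * a2
    b = alpha * b1 + (1 - alpha) * b2
    # log of the integral of p^alpha * q^(1 - alpha)
    log_int = betaln(a, b) - alpha * betaln(a1, b1) - (1 - alpha) * betaln(a2, b2)
    return (exp(log_int) - 1.0) / (alpha * (alpha - 1.0))

# Posteriors of two arms with true means 0.3 and 0.4 after N pulls each:
# Beta(0.3N, 0.7N) vs Beta(0.4N, 0.6N). Their alpha-divergence keeps
# growing with N, so any fixed inference error eps is eventually dominated.
divs = [alpha_div_beta(0.3 * N, 0.7 * N, 0.4 * N, 0.6 * N, alpha=2.0)
        for N in (10, 50, 100)]
```

Running this, `divs` is strictly increasing in N, matching the claim that the divergence between the exact posteriors eventually exceeds any fixed $\epsilon$.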
Summary: This paper studies Bayesian bandits with approximate inference errors. The authors proposed an algorithm called the Enhanced Bayesian Upper Confidence Bound (EBUCB). Under a two-bounded $\alpha$-divergence assumption, the authors show that EBUCB can achieve the optimal logarithmic regret. The authors also show that sub-linear regret cannot be achieved with only a one-bounded $\alpha$-divergence. Strengths: The authors provide the first $\log(T)$-type of regret for Bayesian bandits with constant approximation error (under 2-bounded $\alpha$-approximation). This result is obtained based on a novel sensitivity analysis of quantile shift. Both the result and the analysis look interesting to me. The authors also show that, under 1-bounded $\alpha$-approximation, one cannot obtain sub-linear regret; this negative result further justifies the necessity of 2-bounded $\alpha$-approximation (Assumption 3.3). Weaknesses: My main concern is that the developed results are only for Bernoulli bandits, not for more general distributions (not even for the Gaussian distribution). Can the authors comment on why Bernoulli is needed in the current analysis? Or what prevents the derived results from being extended to other distributions? Another question I have is regarding the constant approximation error for the 2-bounded $\alpha$-approximation (i.e., the $\epsilon$ term in Assumption 3.3). It seems a bit weird to me that the regret bound doesn't depend (too much) on $\epsilon$ (e.g., in Corollary 3.8). Can the authors comment on the reasons behind this? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for the careful reading and valuable comments. We address the reviewer's concerns below. Q1. "Bernoulli settings and the potential extension to other distributions". One of the major techniques in our analysis is Lemma C.1, which provides tight upper and lower bounds on the tails of approximate distributions to control the quantiles chosen by the EBUCB algorithm. In general, this bound depends on the specific distributions, which are Beta distributions plus inference errors in our setting. It is possible to generalize this bound to a certain family of distributions. For instance, Kaufmann, E., [2018] extends the bound for the Beta posterior distribution to the exponential family that includes Gaussian (without approximate inference). We believe that by combining Kaufmann, E., [2018] with our techniques in Section 3.2, our results could be generalized to the exponential family with approximate inference. This, however, requires some additional careful technical derivation beyond our current bounds for Bernoulli with approximate inference, which will be our future research direction. Q2. "Dependency on $\epsilon$". The regret bound indeed contains $\epsilon$. The exact finite-time upper bound is provided in Step 4 in the proof of Theorem 3.7 (we only show the most dominant term of the regret bound in the main paper to improve readability). It includes the explicit dependence on $\epsilon$, through the terms $M_{\epsilon,1}$ and $M_{\epsilon,2}$ there, and these error terms obviously increase as $\epsilon$ increases. However, this dependence on $\epsilon$ does not appear in the dominating term. The main reason is that the exact posterior becomes more "concentrated" on the true mean with little variability as the time t increases, so the impact from the $\epsilon$ error will vanish. We have some related discussions in Remark 3.9.
To elaborate, we use a simple example to provide some intuition. Consider two actions with true mean rewards 0.3 and 0.4. Then after time t, their exact posteriors will be approximately Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)), which concentrate more and more around the mean rewards 0.3 and 0.4 as t increases. The alpha divergence between Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)) goes to infinity when both N_1(t) and N_2(t) go to infinity (which is true by the information lower bound). Therefore, the $\alpha$-divergence between the exact posteriors of the two actions will keep increasing as t increases (eventually exceeding $\epsilon$). Hence, intuitively speaking, the $\epsilon$ error can only "substantially" impact the regret up to some time step $T_0$, and this additional error is indeed captured by our error bound: you can see the terms $M_{\epsilon,1}$ and $M_{\epsilon,2}$ that depend on $\epsilon$ in Step 4 in the proof of Theorem 3.7. However, they are not the dominating term since after time $T_0$, the $\epsilon$ error cannot impact the regret too much, as the alpha divergence between Beta(0.3N_1(t), 0.7N_1(t)) and Beta(0.4N_2(t), 0.6N_2(t)) has become sufficiently large. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their rebuttal. I'd like to keep my current scores and suggest the authors add the related discussion into the paper during revision. --- Reply to Comment 1.1.1: Comment: Thank you very much. We appreciate the reviewer's reading and reply. We will make sure to add these discussions into the final version of our paper.
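Since the discussion centers on the posterior quantiles chosen by a Bayesian-UCB-style index, the following stdlib-only sketch shows a vanilla Bayesian UCB for Bernoulli arms. It is not the paper's EBUCB: the quantile level 1 - 1/(t+1) and the normal approximation to the Beta quantile are our simplifications, and all names are ours.

```python
import random
from statistics import NormalDist

def beta_quantile_approx(a, b, q):
    """Normal approximation to the Beta(a, b) quantile at level q
    (a simplification; the paper works with exact Beta quantiles)."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    x = mean + NormalDist().inv_cdf(q) * var ** 0.5
    return min(max(x, 0.0), 1.0)

def bayes_ucb_bernoulli(mus, T, seed=0):
    """Generic Bayesian-UCB loop for Bernoulli arms with Beta(1,1) priors:
    pull the arm whose posterior quantile at level 1 - 1/(t+1) is largest."""
    rng = random.Random(seed)
    K = len(mus)
    s = [1.0] * K    # Beta 'a' parameters (successes + prior)
    f = [1.0] * K    # Beta 'b' parameters (failures + prior)
    pulls = [0] * K
    for t in range(1, T + 1):
        q = 1.0 - 1.0 / (t + 1)   # rising quantile level, kept inside (0, 1)
        idx = [beta_quantile_approx(s[k], f[k], q) for k in range(K)]
        k = max(range(K), key=idx.__getitem__)
        r = 1.0 if rng.random() < mus[k] else 0.0
        s[k] += r
        f[k] += 1.0 - r
        pulls[k] += 1
    return pulls

pulls = bayes_ucb_bernoulli([0.2, 0.8], T=500)
```

With a large gap between the arm means, the suboptimal arm's quantile index falls below the optimal arm's after a handful of pulls, so almost all of the budget goes to the better arm.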
Summary: This paper considers the standard multi-armed bandit problem with a prior on rewards, allowing the design of Bayesian algorithms such as Thompson sampling and Bayesian UCB. The problem of interest is when the exact posterior distributions are not available; rather, an approximate posterior is available. It has been known that even with a small constant alpha-divergence error between the true and approximate posterior, Thompson sampling does not converge. This paper shows that with a two-bounded alpha-divergence (see Assumption 3.3), Bayesian UCB achieves an order-optimal regret bound. Strengths: The problem of designing Bayesian optimization algorithms which find the optimal action in the presence of approximate distributions is very interesting. Weaknesses: The setting is motivated by the difficulty of obtaining true distributions that often arises in complex models and when using methods such as variational inference. The results are, however, proven on a system of Bernoulli distributions where the posterior is available in closed form. I think this breaks the logic of the motivation to a great extent. While I tried to read all the proofs and details, I could not obtain a good intuition into why one bounded alpha-divergence is not enough while two bounded alpha-divergences work, or how this can be applied to, for example, MCMC or variational inference. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors provide more intuition into the significance of their results for the complex settings where the distributions are approximated? It seems the results are limited to simple settings where the posterior is easily obtained. Could the authors intuitively explain why one bounded alpha-divergence is not enough and two bounded alpha-divergences work? 2. As one bounded alpha-divergence is not enough and only two bounded alpha-divergences work in this setting, one would expect Theorem 3.7 to fail when $\alpha_1=\alpha_2$.
In line 254, it is stated that we may choose $\zeta=\frac{1}{\tilde{\alpha}_2}$. If, in addition, we set $\alpha_1=\alpha_2$, implying $\tilde{\alpha}_1=\tilde{\alpha}_2$, the upper bound in Theorem 3.7 still seems to work. Could authors explain what happens here when $\alpha_1=\alpha_2$ and if the Theorem fails. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As mentioned above, the main limitation of the paper seems to be the simple setting, where the posteriors are available in closed form. The results do not seem to be extendable to more complex setting where approximate distributions are actually relevant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer for the careful reading and valuable feedback. We address the reviewer's concerns below. Q1. "Significance of current results for the complex settings". The key contribution of this paper is the theoretical insights and guidelines that positively support the practical use of approximate Bayesian inference in bandits. This, to the best of our knowledge, has not been addressed in the previous literature, even in Bernoulli bandit problems. Our results contribute to understanding approximate Bayesian bandit methods in complex settings from the following aspects: 1) Our study takes the very first step in investigating the theory paradox in approximate Bayesian bandit methods. As such, we validate our current framework on a basic problem setting, the Bernoulli bandit problem, and consider more general bandit problems as our future research directions. Our framework is generic and could be further extended to more complex settings. Below is one direction for more complex settings: Generalization to the exponential family: Kaufmann, E., [2018] extends the bound for the Beta posterior distribution to the exponential family that includes Gaussian (without approximate inference). By combining Kaufmann, E., [2018] with our techniques in Section 3.2, our results could be generalized to the exponential family with approximate inference. This, however, requires some additional careful technical derivation beyond our current bounds for Bernoulli with approximate inference, which will be our future research direction. 2) Our study provides theoretical support for the superior performance of approximate Bayesian bandit methods, which is not limited to the basic settings. With bounded inference error, Phan et al. [2019] indicated negative theoretical results in multi-armed bandit problems, which contradicts the superior performance of approximate Bayesian bandit methods in practice.
To this end, our work resolves this paradox by showing positive results, and further provides direct guidance for real-world algorithm design. 3) The two $\alpha$'s should be in different regions (one $\alpha$ greater than 1, and the other $\alpha$ less than 0) to guarantee that $P_2$ is close to $P_1$ from both "directions". As we have discussed after Assumption 3.3, "Intuitively speaking, minimizing $D_\alpha(P_1, P_2)$ when $\alpha$ is large (greater than 1), $P_2$ is flattened to cover $P_1$'s entire support, while when $\alpha$ is small (less than 0), $P_2$ fits $P_1$'s dominant mode." Therefore, with one bounded alpha-divergence, one can only guarantee that $P_2$ is close to $P_1$ from one "direction", which could lead to degraded performance when using the approximate distribution. Q2. "$\alpha_1$ = $\alpha_2$". In Assumption 3.3, we explicitly state that the two parameters should satisfy $\alpha_1 > 1$ and $\alpha_2 < 0$. Therefore, Theorem 3.7 excludes the setting where $\alpha_1 = \alpha_2$ or $\alpha_1$ is close to $\alpha_2$. In particular, the two $\alpha$'s should be in different regions (one $\alpha$ greater than 1, and the other $\alpha$ less than 0) to guarantee that $P_2$ is close to $P_1$ from both "directions". As we have discussed after Assumption 3.3, with one bounded alpha-divergence, one can only guarantee that $P_2$ is close to $P_1$ from one "direction". Moreover, in general, if Assumption 3.3 holds for $\alpha_1 < 0$ and $\alpha_2 < 0$, even if $\alpha_1$ and $\alpha_2$ are different, we cannot obtain a sublinear regret in Theorem 3.7, where counterexamples can be constructed similarly to Theorems 3.12/3.13.
To further address the reviewer's concern, we will revise the introduction section to clarify that the two $\alpha$'s must lie in different regions: “However, we will provide a novel theoretical framework and point out that the answer can be 'Yes' when the inference error measured by two different $\alpha$-divergences is bounded, where one $\alpha$ is greater than 1 and the other is less than 0 (which guarantees that the approximate posterior is close to the exact posterior from both “directions”). ” --- Rebuttal Comment 1.1: Comment: Thank you for your response and clarifications. That resolves my misunderstanding about the choice of alphas. I still find the contribution limited, given that in the Bernoulli case the posteriors are available in closed form, which weakens the motivation for the setting. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the reviewer's reply and the increased score. We are glad to hear that our response helped clarify the choice of alphas. Regarding the problem setting, we understand the limitation raised by the reviewer, and we will leverage our techniques in Section 3.2, especially the bounds related to the two $\alpha$-divergences, to study more general bandit problems and algorithms as future research directions.
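For readers less familiar with $\alpha$-divergences, the two-sided condition discussed in this rebuttal can be made concrete. Under one common normalization (the paper may use a slightly different convention), the $\alpha$-divergence between densities $p_1$ and $p_2$ is:

```latex
D_\alpha(P_1 \,\|\, P_2)
  = \frac{1}{\alpha(\alpha - 1)}
    \left( \int p_1(x)^{\alpha}\, p_2(x)^{1-\alpha}\, \mathrm{d}x \;-\; 1 \right)
```

As $\alpha \to 1$ this recovers $\mathrm{KL}(P_1 \| P_2)$, and as $\alpha \to 0$ it recovers $\mathrm{KL}(P_2 \| P_1)$. For $\alpha > 1$ the integral is finite only if $p_2 > 0$ wherever $p_1 > 0$, so bounding it enforces the mass-covering behavior; for $\alpha < 0$ the integrand blows up wherever $p_2$ places mass that $p_1$ does not, enforcing the mode-seeking behavior described in the rebuttal.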
null
null
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
Accept (poster)
Summary: This paper presents a significant step towards a general-purpose vision-language model with instruction-following abilities. It is built upon the existing BLIP-2 model and the instruction-following dataset LLaVA-Instruct-150K [1]. It further extends the scale of instruction tuning by automatically converting public vision-language datasets such as VQAv2 into instruction-following formats. The authors compare the instruction-tuning paradigm with classic multi-task learning and show that language instruction is the key to generalizing to unseen task instructions. Strengths: This work extends multi-modal instruction tuning to a wide range of datasets and tasks spanning 11 diverse categories. Robust evaluation is performed by instruction-tuning on 13 held-in datasets and zero-shot evaluation on another 13 held-out datasets. The authors demonstrate the generalization capabilities of their proposed approach by comparing with classic multi-task learning on both held-in and held-out datasets. The paper is well-written and easy to follow. All assets used in this work, including the instruction-tuned models and datasets, are released to the public. Implementation details such as the instruction-aware Q-Former, balanced dataset sampling, and inference strategies like vocabulary reranking are empirically effective and technically sound. Weaknesses: InstructBLIP uses LLaVA-Instruct-150K [1] as one of the instruction-tuning datasets. However, there is no direct comparison to the open-sourced LLaVA model, which is trained solely on this dataset. Apart from this, I do not find any major weaknesses in this work. However, I would be curious to know whether InstructBLIP has stronger vision-language reasoning capabilities than widely-adopted CLIP-like models, which are known to exhibit bag-of-words behaviors [2,3,4,5,6]. [1] Visual Instruction Tuning. Liu et al. 2023. [2] When and why vision-language models behave like bags-of-words, and what to do about it? Yuksekgonul et al. 2022. 
[3] Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality. Thrush et al. 2022. [4] CREPE: Can Vision-Language Foundation Models Reason Compositionally? Ma et al. 2022. [5] Equivariant Similarity for Vision-Language Foundation Models. Wang et al. 2023. [6] Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores. Lin et al. 2023. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: There are some minor suggestions to improve the clarity of the paper: - L98: What is the "standard" language modeling loss given multimodal instruction-following samples? Do you adopt the same LM loss as LLaVA, or do you use the captioning loss as in BLIP-2? In other words, is the LM loss enforced on all tokens, including both instruction and response tokens? - L132-134: How do you decide the dataset balancing ratio for each task? Do you use held-in or held-out scores? - L140: What are the evaluation metrics for the open-ended generation tasks? What is the sampling procedure, i.e., nucleus/top-k with temperature? I would be happy to revise my rating if the authors can address my above-mentioned weaknesses and questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors adequately addressed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and insightful comments. We address your questions below. ---- **Q1**: Comparison to LLaVA. **A1**: Our paper provides two comparisons between InstructBLIP and LLaVA: 1. A qualitative comparison using the same images and prompts (Appendix B). 2. A finetuning comparison on the ScienceQA benchmark (Table 3), where InstructBLIP achieves 90.7% accuracy versus LLaVA’s 89.0%. We were unable to provide further quantitative comparisons, as the LLaVA paper did not evaluate their model on other public benchmarks. For a more comprehensive understanding of InstructBLIP's performance, we refer to three follow-up papers [1,2,3] that provide systematic evaluations from different perspectives, which demonstrate the superior performance of InstructBLIP over LLaVA in most scenarios. ---- **Q2**: Bag-of-words behaviors. **A2**: We believe that InstructBLIP has stronger vision-language reasoning capabilities and fewer bag-of-words behaviors than CLIP-like models. Intuitively, LLMs (which CLIP does not have) may provide strong language representations, which are sensitive to word ordering, and our generative instruction-tuning objective may encourage finer-grained representations than the contrastive learning objective of CLIP. In the paper, we demonstrate SOTA zero-shot results on GQA, a QA dataset focused on compositionality. The dataset requires locating objects described by attributes (e.g., green door, large container) and spatial relations (e.g., Is there a bag to the right of the green door?). Bag-of-words representations that are insensitive to word ordering will likely fail on GQA questions. For example, “is there a green door to the right of the bag” is a completely different question from “is there a bag to the right of the green door”. Follow-up evaluation work lends further support to our claim. 
[1] shows that InstructBLIP achieves the best performance (among 18 models) on tasks such as scene understanding, instance identity, instance attributes, instance location, instance counting, and spatial relations. [2] finds that InstructBLIP has the least hallucination when compared with mPLUG-Owl, LLaVA, MiniGPT-4, and MultiModal-GPT. [3] conducts a comprehensive evaluation of 12 VL models on two critical abilities, perception and cognition, where InstructBLIP demonstrates very strong performance. These achievements require robust reasoning and precise recognition of attributes, objects, and relations. Overall, these results empirically demonstrate the strong vision-language reasoning capabilities of InstructBLIP. ---- **Q3**: Suggestions to improve the clarity of the paper. **A3**: Thank you for your valuable suggestions; we will revise and improve the clarity of these points in the next version of our paper. 1. The LM loss is only enforced on the response tokens. 2. We first use the equation on L130, $p_d = \frac{\sqrt{S_d}}{\sum_{i=1}^D \sqrt{S_i}}$ ($\{S_1, S_2, \dots, S_D\}$ are dataset sizes and $p_d$ is the probability of a sample being selected from dataset $d$), to compute the data sampling ratio, where datasets with more samples have a higher chance of being sampled. Then, we make manual adjustments according to the difficulty of individual tasks. Specifically, we finetune the model on each individual dataset and check how many epochs it takes to converge using the validation sets. We increase the sampling ratio for datasets that take longer to converge. 3. We adopt beam search with a beam size of 1 for HatefulMemes, VSR, and OCR-VQA, 3 for NoCaps, and 5 for the other tasks. ---- **References** [1] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension, Li et al., 2023. [2] Evaluating Object Hallucination in Large Vision-Language Models, Li et al., 2023. 
[3] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models, Fu et al., 2023. --- Rebuttal Comment 1.1: Title: Thanks! Comment: Thank you for the comprehensive response. I have two follow-up questions: **1. LLaVA's paper reports 90.92% finetuning performance on ScienceQA (Table 6 in their arXiv paper). Where does the 89.0% come from?** Please note that I only ask this question for clarification. I think it is more important to evaluate on a wide suite of VL benchmarks instead of hill-climbing on a single dataset like LLaVA did. **2. In terms of bag-of-words behaviors, existing benchmarks such as Winoground [1] and EqBen [2] are formulated as image-text retrieval tasks. I wonder if it is possible to extend InstructBLIP to such tasks, because the ITC/ITM head of BLIP is no longer available.** **References**: [1] Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality. Thrush et al. 2022. [2] Equivariant Similarity for Vision-Language Foundation Models. Wang et al. 2023. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your response. We would like to clarify your follow-up questions: 1. Sorry for the ambiguity in our previous response. As we mainly focus on the vision-language performance, we only evaluate the **IMG** set of the ScienceQA dataset (i.e., the subset with the image context). For the **IMG** set, InstructBLIP achieves 90.7% accuracy, while LLaVA achieves 88.0%, and LLaVA+GPT-4 reaches 89.0%. 2. Thank you for this interesting question. As demonstrated in VisualGPTScore [1], on the Winoground and EqBen benchmarks (Table 11), BLIP2-FlanT5 achieves comparable performance to models with the ITC/ITM head. This is done by calculating the generative score $p(\text{text}|\text{image})$. Therefore, it is possible to also extend InstructBLIP to these benchmarks. VisualGPTScore [1] also shows that the generative score can mitigate the bag-of-words behavior. 
**References** [1] VisualGPTScore: Visio-Linguistic Reasoning with Multimodal Generative Pre-Training Scores, Lin et al., 2023.
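The square-root dataset balancing formula cited in A3 of the rebuttal above (before the manual difficulty adjustments) can be sketched in a few lines. The dataset names and sizes below are illustrative placeholders, not the actual InstructBLIP training mixture.

```python
import math

def balanced_sampling_probs(sizes):
    """Probability of drawing a training sample from each dataset,
    proportional to the square root of the dataset size (p_d formula)."""
    roots = {name: math.sqrt(s) for name, s in sizes.items()}
    total = sum(roots.values())
    return {name: r / total for name, r in roots.items()}

# Illustrative sizes only (hypothetical, not the paper's actual values).
probs = balanced_sampling_probs(
    {"vqav2": 400_000, "llava_instruct": 150_000, "ocr_vqa": 100_000}
)
```

The square root damps the dominance of large datasets: a dataset four times larger is sampled only twice as often, which is the intended balancing effect before the per-task manual adjustment described in the rebuttal.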
Summary: The paper presents InstructBLIP, a vision-language instruction tuning framework to solve a wide range of visual-language tasks through a unified multimodal interface. The authors conduct a comprehensive study on vision-language instruction tuning, transforming 26 datasets into the instruction-tuning format and grouping them into 11 task categories. They propose an instruction-aware Q-Former to equip BLIP models with instruction-following capabilities. InstructBLIP models achieve state-of-the-art zero-shot performance on a wide range of held-out vision-language tasks. Strengths: - The paper provides a comprehensive study of vision-language instruction tuning, covering a wide variety of tasks. - The proposed InstructBLIP extends BLIP with an instruction-aware Q-Former and finetunes the model on both instruction-following datasets and VQA datasets, showing better performance than previous models. - The paper is generally well-written and ablates the model's design of the instruction-aware Q-Former. Weaknesses: 1. The paper does not provide enough analysis and comparison of the open-ended out-of-domain multimodal question answering capability. This is one of the most impressive results that GPT-4, MiniGPT-4, and LLaVA show. The paper claims in L208 "Although all models are capable of generating long-form responses, InstructBLIP’s outputs generally contains more proper visual details and exhibits logically coherent reasoning steps. Importantly, we argue that long-form responses are not always preferable." Without enough evidence, it is hard to justify this point. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the instruction-aware Q-Former work on multi-turn conversations? For example, given `<image><Q1><A1><Q2>`, what corresponds to the `<instruction>` part in Fig. 3? Is it `<Q1><A1><Q2>` or `<Q2>` only? In both cases, users can ask complex questions that may be hard for a lightweight module like the Q-Former to interpret. 
For example, (1) "can you explain your answer with more detail?" (2) "That is not correct, that is not what I am asking. Please try again." In these cases, it seems to be tricky for Qformer to extract the corresponding visual features, either because the region to extract visual features is unclear from `<Q2>` only (1), or it seems hard to correctly extract the information without sophisticated reasoning capabilities (2). What is the benefit of Qformer in these cases? 2. For the Q-former architecture in Fig. 3, to my understanding, there will be several consecutive yellow blocks and instruction tokens will be updated via self-attention and feed-forward (green) throughout different blocks, while the updated tokens will not be fed into the LLM. Is this understanding correct? If so, the figure does not reflect (1) multiple consecutive blocks; (2) instruction tokens being updated between blocks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author does not discuss limitations in the paper or appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable review and comments. We would like to address your questions as follows: ---- **Q1**: Analysis and comparison on the open-ended out-of-domain multimodal question answering. **A1**: We have provided qualitative comparisons with GPT-4, MiniGPT-4, and LLaVA on open-ended out-of-domain visual questions. The comparisons are included in the appendix that can be viewed in the supplementary material. Our claims are supported by such comparisons. Since qualitative comparisons can be subjective to interpret, our paper provides a systematic quantitative evaluation on a wide range of benchmarks, showing the state-of-the-art performance of InstructBLIP models on many out-of-domain vision-language tasks, including multimodal question answering. These follow-up papers [1][2][3][4] also demonstrate the strong performance of InstructBLIP over existing models (MiniGPT-4, LLaVA) on a wide range of challenging vision-language tasks. For example, [4] tests object hallucination with adversarial objects, where InstructBLIP performs the best compared to other VL models. [1] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models, Fu et al., 2023. [2] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension, Li et al., 2023. [3] LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models, Xu et al., 2023. [4] Evaluating Object Hallucination in Large Vision-Language Models, Li et al., 2023. ---- **Q2**: Instruction format. **A2**: In multi-turn conversations, we use all historic context as the instruction. Therefore, for a two-turn conversation, the instruction is formatted as <Q1><A1><Q2>. ---- **Q3**: Q-Former feature extraction. **A3**: Since all dialog context is present in the instruction, the Q-former can learn to extract visual features according to both the current question and the historic context. 
The instruction-aware Q-Former mechanism is designed to benefit cases where the instruction provides useful guidance to extract specific visual features. In cases where the instruction does not contain useful information, the Q-Former can simply ignore the instruction and extract general visual features. ---- **Q4**: About Fig. 3. **A4**: The reviewer’s understanding is correct. The Q-Former contains multiple consecutive transformer blocks, where the output from the previous block is given as input to the next block, and only the query features from the last layer are fed to the LLM. We will revise the figure to make this clear. ---- **Q5**: About limitations. **A5**: We have included a section called "Broader Impact" in the appendix, which discusses the limitations of our work. We will make it more complete and clear. --- Rebuttal Comment 1.1: Title: In the era of foundation models it is very hard to determine what is truly "out-of-domain". Comment: I disagree with Reviewer r68F's claim that InstructBLIP does not report on *"open-ended out-of-domain multimodal question answering capability"*. I don't think that showing qualitative results on a few (and likely cherry-picked) samples is a scientific way to benchmark models. Very soon everyone will be overfitting their models to these few selected samples. From this perspective, I think InstructBLIP does a great job of evaluating on existing and well-established tasks and strictly following the standard ML practice of train/val splits. --- Rebuttal Comment 1.2: Title: Follow-up discussion Comment: Thank you for your response and for providing the latest benchmark papers. After carefully reading the authors' response and the four papers provided in the response, I have two main concerns that I would like to discuss with the authors: 1. Benchmark (Q1) - There is a large discrepancy between the qualitative and quantitative evaluation. 
The qualitative results provided in Figure 1 are quite different from the distribution of the academic benchmark datasets in Table 1. - In LVLM-eHub [3], InstructBLIP ranks poorly on the LVLM Arena (Fig. 1 (b) and (c)), which is evaluated by humans. The paper concludes that "InstructBLIP performs best on in-domain capability evaluation, while being much worse than many instruction-tuned models, implying a severe overfitting issue." This makes me more concerned about the performance of InstructBLIP in real-world scenarios. - Why is VisDial treated as a held-out test dataset? It uses 120K images from COCO, which is also used by held-in datasets (COCO Caps, VQAv2, OKVQA, A-OKVQA). I suggest the authors verify that there is no overlap in images/annotations between the held-in and held-out datasets. - The authors conduct ablation studies on the model design in Table 2 on a selection of 5 held-out datasets. Is it still fair to include these 5 datasets in Table 1 as held-out zero-shot test datasets? 2. Architecture (Q2/Q3) - Regarding InstructBLIP training for multi-turn conversations, could you please clarify how it works? Since the output of the Q-Former is different for each turn, do we need to add $32k$ latent tokens for a $k$-turn conversation? Do we need to do the same thing for inference? - Is there any evidence that the Q-Former can handle long and complex instructions? If the "Q-Former can simply ignore the instruction and extract general visual features", are 32 tokens enough? --- Reply to Comment 1.2.1: Title: Response to follow-up questions Comment: Thank you for reviewing the rebuttal. We would like to address your follow-up concerns: ---- **Benchmark** **Q1**: There is a large discrepancy between the qualitative and quantitative evaluation. The qualitative results provided in Figure 1 are quite different from the distribution of the academic benchmark datasets in Table 1. **A1**: We agree that the distributions of quantitative and qualitative samples are different. 
We have made every effort to conduct a systematic and comprehensive evaluation using public benchmarks. Additionally, we have addressed out-of-domain cases in our qualitative results, as shown in both Fig. 1 and Appendix B of the supplementary material, to demonstrate our model's capabilities. However, we believe that a qualitative comparison between models is not exhaustive, as it encompasses only a limited number of cases and can be subjective. ---- **Q2**: About LVLM-eHub. **A2**: We believe the Arena in LVLM-eHub is not a reliable evaluation, for the following reasons: 1. The paper does not describe how many samples were collected for the Arena leaderboard. 2. The June 29 version of the Arena Ranking, which can be found on the LVLM Hub website (as links are not allowed in the response), differs significantly from the June 13 version (Fig. 1c in their paper), even though no new models were added. We believe that the current Arena is not stable enough to accurately demonstrate the true capabilities of various models. One potential reason could be the limited number of samples. 3. It is a bit hard to distinguish between *in-domain* and *out-of-domain* in this evaluation. In the quantitative results from LVLM-eHub, many datasets they utilized were not included in InstructBLIP's training and, thus, cannot be classified as *in-domain*. Despite this, InstructBLIP still demonstrates strong performance on these datasets. Furthermore, the Arena evaluation cannot be directly treated as *out-of-domain*, since we are unaware of the data provided by the users. 4. InstructBLIP significantly outperforms other models in avoiding the issue of object hallucination, making it more reliable and trustworthy. One potential reason is that InstructBLIP can adaptively adjust the length of its responses, as illustrated in Appendix B. In contrast, many other models consistently produce lengthy paragraphs, a pattern they learned during training. 
While this tendency might result in more hallucination, users might favor longer responses because they appear more detailed. ---- **Q3**: About VisDial. **A3**: We apologize for the ambiguity. VisDial is indeed not truly held-out, we will clarify this in the next version of our paper. However, it is nearly the only high-quality visual dialog dataset available for quantitative evaluation, so we still incorporated it in our evaluation. Additionally, for the other held-out datasets, there is no overlap with the held-in ones. ---- **Q4**: About ablation studies in Table 2. **A4**: The experiments in Table 1 and Table 2 are entirely independent, they do not influence each other. For Table 2, we conducted separate experiments to demonstrate the impact of using instruction-aware visual features and the balanced data sampling strategy. We've presented the results for only 5 datasets to maintain clarity in our paper's format, and these are sufficient to support our insights. ---- **Architecture** **Q1**: About multi-turn conversation. **A1**: As the model focuses on the question or utterance of the current turn, we only need to use 32 latent tokens for the visual features. Previous turns simply serve as the dialog context. ---- **Q2**: Is there any evidence that Qformer can handle long and complex instructions? **A2**: The capability of InstructBLIP depends on the data it has been trained with. Therefore, its capability of handling complex instructions is influenced by the amount of such data we utilize. ---- **Q3**: If "Q-former can simply ignore the instruction and extract general visual features", is 32 tokens enough? **A3**: When there is no useful information in the instruction, it typically represents a generic task, and 32 tokens are usually sufficient in such cases. Moreover, based on our preliminary experiments, increasing the number of query embeddings beyond 32, for instance to 48, does not yield any further improvement. 
We will include this discussion in the next version of our paper. ---- Thank you once again. We hope our reply adequately addresses your follow-up concerns.
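The multi-turn convention from the exchange above (all dialog history concatenated into a single instruction, with a fixed 32 query tokens regardless of the number of turns) can be illustrated with a small helper. The function name and the plain-string format are hypothetical, chosen only to mirror the reviewer's `<Q1><A1><Q2>` notation; the actual implementation's tokenization and templating may differ.

```python
def build_instruction(turns):
    """Concatenate all dialog history into one instruction string.

    `turns` is a list of (question, answer) pairs; the final turn's
    answer may be None, meaning the model is asked to produce it.
    """
    parts = []
    for question, answer in turns:
        parts.append(question)
        if answer is not None:
            parts.append(answer)
    return " ".join(parts)

# Two-turn example: the instruction is <Q1><A1><Q2>, per the rebuttal.
instr = build_instruction([("What is in the image?", "A dog."),
                           ("What breed is it?", None)])
```

Because the whole history is folded into the instruction text, the visual side still uses only the single set of 32 query embeddings, as the authors state in their reply.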
Summary: This paper presents a study of instruction finetuning for vision-language tasks. The paper follows the design of FLAN for instruction tuning and borrows ideas from Flamingo for image/text model freezing and the query network. Experimental results suggest FLAN-style instruction tuning also works for vision-language tasks, and this paper provides the first comprehensive study. Strengths: 1. To the best of my knowledge, this is the first work on FLAN-style instruction tuning in the VL domain. It provides a comprehensive analysis for future development in this field. 2. The experiments show the effectiveness of the proposed instruction-aware query network, which is novel and interesting. Weaknesses: 1. Overall, the experimental results are expected, as demonstrated by existing LLM papers. To that end, novelty is somewhat limited, as similar designs have been explored in LLMs and Flamingo. 2. The model is called InstructBLIP, yet the variants are based on T5 and Vicuna (LLaMA). This puzzled me for a second. I feel the key component (frozen LLM + query network) of the model was first shown to work by Flamingo and thus find the naming a bit misleading. Perhaps a completely new name would be more appropriate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing and providing valuable comments. We address your questions in the following. ---- **Q1**: Novelties of InstructBLIP. **A1**: Our paper proposes a novel vision-language instruction tuning framework that has not been previously explored. We delineate our novelties as follows. 1. **Vision-Language Instruction Tuning vs. Text-only Instruction Tuning.** Although instruction tuning has been applied to text-only LLMs, it has not been systematically investigated in vision-language LLMs. Vision-language (VL) instruction tuning is challenging, as the additional visual modality introduces a high level of variety to the input, making it harder for the model to generalize. We propose novel approaches to tackle challenges unique to VL instruction tuning, such as instruction-aware visual feature extraction, which leads to non-trivial improvements and state-of-the-art performance. 2. **InstructBLIP vs. Flamingo.** InstructBLIP is drastically different from Flamingo. From an architectural perspective, InstructBLIP is based on the BLIP-2 backbone. The Q-Former in InstructBLIP is different from the perceiver resampler in Flamingo. The Q-Former in InstructBLIP has been pre-trained with vision-language representation learning, which enables our proposed instruction-aware visual feature extraction. We refer the reviewer to the BLIP-2 paper [1] for more details on Q-Former pre-training. From a model-training perspective, InstructBLIP is also different from Flamingo. InstructBLIP is trained on a wide variety of vision-language instruction data, whereas Flamingo is pre-trained with image-caption pairs similar to BLIP-2. Our experiments validate the significant advantage of InstructBLIP over both Flamingo and BLIP-2. ---- **Q2**: Naming of InstructBLIP. **A2**: We name our model InstructBLIP because it is an instruction-tuned model based on the BLIP-2 [1] backbone. 
InstructBLIP is a generic vision-language instruction-tuning framework that is flexible to incorporate any LLMs. As discussed above, InstructBLIP is drastically different from Flamingo. ---- **References** [1] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models, Li et al., 2023.
Summary: The paper proposes InstructBLIP, a vision-and-language instruction tuning framework that enables general-purpose models to solve a wide range of visual-language tasks through a unified natural language interface. It uses a diverse set of instruction data to train a multimodal LLM. The model is initialized with a pre-trained BLIP-2 model consisting of an image encoder, an LLM, and a Query Transformer (Q-Former) to bridge the two. The image encoder and the LLM are kept frozen while the Q-Former is finetuned. The paper makes the following contributions: 1. A comprehensive and systematic study of vision-language instruction tuning. 2. It proposes instruction-aware visual feature extraction, a novel mechanism that enables flexible and informative feature extraction according to the given instructions. 3. The InstructBLIP models are evaluated and open-sourced using two families of LLMs: - FlanT5, an encoder-decoder LLM finetuned from T5. - Vicuna, a decoder-only LLM finetuned from LLaMA. Strengths: The following are the strengths of the paper: 1. The paper is well written and easy to follow. 2. The proposed framework is evaluated on a large variety of tasks and datasets and beats the SOTA. 3. The paper provides both qualitative and quantitative evaluations of the models. Weaknesses: The paper has a couple of weaknesses: 1. A human evaluation of the model is missing. 2. The model is evaluated on static datasets, which raises a question about its generalizability to real-world scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. A human evaluation of the model should be provided. Q2. An evaluation on commercial datasets should also be provided to check its generalizability in real-world settings. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has a couple of weaknesses: 1. A human evaluation of the model is missing. 2. The model is evaluated on static datasets, which raises a question about its generalizability to real-world scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and providing your insights. We value your feedback and have responded to your concerns as follows: ---- **Q1**: Human evaluation. **A1**: Our paper evaluates the InstructBLIP models on a wide range of well-established benchmarks, which sufficiently verifies the advantage of our vision-language instruction-tuning framework in generalizing to unseen tasks. Human evaluation, while potentially beneficial in certain applications, is not an essential consideration in this context. ---- **Q2**: Generalizability. **A2**: For a more comprehensive understanding of InstructBLIP's performance, we refer to three follow-up papers that provide further evaluations from different perspectives. [1] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension, Li et al., 2023. SEED-Bench evaluates Multimodal LLMs across 12 evaluation dimensions: Scene Understanding, Instance Identity, Instance Attributes, Instance Location, Instance Counting, Spatial Relations, Instance Interaction, Visual Reasoning, Text Recognition, Action Recognition, Action Prediction, and Procedure Understanding, utilizing 19K human-annotated data. Among the 18 evaluated models, InstructBLIP achieves the best average performance, significantly outperforming the other multimodal LLMs. [2] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models, Fu et al., 2023. This paper provides a comprehensive evaluation of 12 advanced Multimodal Large Language Models (MLLMs) across 14 perception and cognition tasks. InstructBLIP achieves top-2 performance in both the overall perception and the cognition test scores, and ranks among the top 3 in 10 out of 14 tasks. Specifically, InstructBLIP attains state-of-the-art performance on tasks including Existence, Count, Color, Scene, and Commonsense Reasoning. [3] Evaluating Object Hallucination in Large Vision-Language Models, Li et al., 2023. 
This paper evaluates large VL models for the hallucination problem, a common issue observed in modern LLMs. Among the five tested large VL models, InstructBLIP performs significantly better than the others, whether on existing benchmarks or on their proposed POPE pipeline, which incorporates a more real-world environment. For example, InstructBLIP achieves a 77.32 F1 score on the most challenging adversarial examples, while the second-best model achieves 70.42. We hope these supplementary evaluations provide a more well-rounded view of InstructBLIP's capabilities and can address the reviewer’s concern. --- Rebuttal 2: Title: Reminder to review the rebuttal Comment: Dear Reviewer prhX, As the discussion period approaches its end, we kindly remind you to review our rebuttal. If our responses address your concerns, would you please consider increasing your rating? Thank you! Best regards, Authors
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your insightful and constructive feedback. We deeply appreciate the time and effort you have dedicated to reviewing our work. We have responded to your comments and questions inside each individual review. We hope these responses provide a more comprehensive view of our paper. Please kindly consider increasing your rating if your concerns have been addressed.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes InstructBLIP, which is built on BLIP-2 and further performs instruction tuning to enable the instruction-following ability of BLIP-2 models. The InstructBLIP model is trained on instruction-following data converted via templates from existing datasets for different tasks (e.g., image captioning and VQA) and on LLM-generated instruction-following data (e.g., LLaVA-Instruct-150K). The model delivers better zero-shot performance on selected datasets compared with BLIP-2 and Flamingo. Strengths: 1. The paper is clear and well-organized. The motivations, technical settings, and details are clearly illustrated. 2. The proposed InstructBLIP model serves as a sound contribution to the research community of general-purpose large multi-modal models. 3. This paper provides insightful analysis of the efficacy and generalization ability of instruction tuning in the ablation studies. Weaknesses: 1. Most of the instruction-following data are converted from existing image-text datasets in a template-based fashion, except for the LLaVA-Instruct-150K dataset, which was composed of GPT-generated content. Simply converting existing image-text datasets with some handcrafted templates may result in a lack of diversity of instructions. 2. The selected tasks and datasets do not include some mainstream image and image-text datasets such as ImageNet and CIFAR for image classification. 3. The proposed framework is only applicable to image-level tasks and still cannot handle object-level tasks such as referring object detection / region description. Meanwhile, the proposed framework cannot handle image generation tasks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The authors provide per-task detailed zero-shot comparisons on held-out datasets. How about the performance on each held-in task? Meanwhile, for the held-in tasks, they can be compared fairly with previous jointly trained multitask generalist methods such as [1-3]. 
How is the performance compared with these methods? [1] Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks. In ICLR 2023. [2] Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and Vision-Language Tasks. In CVPR 2023. [3] A Unified Sequence Interface for Vision Tasks. In NeurIPS 2022. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: As discussed in the weakness section, the proposed framework is only applicable to image-level tasks and still cannot handle object-level tasks such as referring object detection / region description. Meanwhile, the proposed framework cannot handle image generation tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review; we appreciate it a lot. Our response to your comments is as follows: ---- **Q1**: Diversity of instructions. **A1**: Converting from existing human-annotated datasets provides high-quality instruction-following data. The data has a reasonable level of diversity for three reasons: 1. The use of a wide range of datasets across 11 different tasks; 2. The diversity within each dataset as a result of the human annotation procedure (e.g., in VQA, different annotators have different styles of asking questions); 3. The use of 10 unique templates for each task. Since it is difficult to quantify the level of diversity, we directly verify the effectiveness of our instruction-following data by evaluating the InstructBLIP models’ zero-shot performance on unseen datasets. ---- **Q2**: Image classification datasets. **A2**: We do not include these image classification datasets because our primary focus is on vision-language tasks that involve both language reasoning and visual perception. However, follow-up work [1] has tested InstructBLIP on image classification (ImageNet-1K, CIFAR-10, Pets37, and Flowers102), object counting, and Multi-class Identification. InstructBLIP achieves the highest average score (0.928) among 8 large vision-language models, with a large gap ahead of the second best (0.858). ---- **Q3**: Object-level tasks and image generation tasks. **A3**: InstructBLIP focuses on image- and video-level vision-language tasks, showing state-of-the-art performance on a wide range of well-established benchmarks. It is a generic framework that could be extended to object-level vision-language tasks. For example, [2] evaluates InstructBLIP on a series of instance-level tasks and InstructBLIP achieves the best performance among 18 vision-language models. Image generation is not the main focus of our paper. 
However, our learned multimodal representation can potentially benefit image generation tasks, as shown by [3]. ---- **Q4**: Performance on held-in tasks. **A4**: The main goal of instruction tuning is to enhance the model's generalization ability to unseen tasks. As such, our primary focus is on held-out evaluations. For held-in evaluations, the average scores are provided in Section 3.4, and the finetuning results are shown in Section 3.5. When compared to previous multitask methods, InstructBLIP generally achieves better performance on held-in datasets. For instance, InstructBLIP attains 62.1% accuracy on OKVQA, while UNIFIED-IO XL reaches 54.0%. On COCO Caption, InstructBLIP achieves 142.6 CIDEr, whereas Uni-Perceiver v2 (large) reports 122.5. We will include more detailed held-in results in the Appendix of the next version of our paper. ---- **References** [1] LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models, Xu et al., 2023. [2] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension, Li et al., 2023. [3] Planting a SEED of Vision in Large Language Model, Ge et al., 2023. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank the authors for their response. I maintain my score as "strong accept".
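As a side note on the template-based conversion discussed in Q1/A1 above, the procedure of wrapping an existing annotation in a handcrafted instruction template can be illustrated with a toy sketch. The template wordings, field names, and helper function below are invented for illustration and are not the actual InstructBLIP templates:

```python
import random

# Toy illustration of template-based instruction conversion (the template
# wordings and field names here are invented, not the actual InstructBLIP
# templates). A (question, answer) annotation from an existing VQA dataset
# is wrapped in one of several handcrafted instruction templates.

VQA_TEMPLATES = [
    "Question: {question} Short answer:",
    "{question} Answer the question using a single word or phrase.",
    "Given the image, answer the following question: {question}",
]

def to_instruction(sample, templates, seed=0):
    """Wrap an annotation in a randomly chosen template from the task's set."""
    rng = random.Random(seed)  # seeded only so this example is reproducible
    template = rng.choice(templates)
    return {
        "instruction": template.format(question=sample["question"]),
        "target": sample["answer"],
    }

example = {"question": "What color is the bus?", "answer": "red"}
converted = to_instruction(example, VQA_TEMPLATES)
```

Because every converted sample reuses one of only a handful of templates per task, instruction phrasing diversity is bounded by the template set, which is exactly the concern raised in Weakness 1 and answered in A1 via the diversity of the underlying human annotations.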
Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning
Accept (spotlight)
Summary: The authors propose a framework for offline-to-online fine-tuning of offline RL algorithms. The idea is to train an additional network that decides how to keep the improvement-constraint balance during fine-tuning. Strengths: The approach improves the performance of all of the evaluated algorithms and can be applied to different offline RL algorithms. Good range of benchmarking tasks and algorithms. Weaknesses: - Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does the modification affect the compute time required to train algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Adding another network requires an additional choice of hyperparameters, which might be hard to find Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your keen observations and valuable suggestions are highly appreciated, and we thank you for helping us to strengthen our paper. > Q1: How does the modification affect the compute time required to train algorithms? Thank you for raising this question. To examine the computational overhead introduced by FamO2O, we utilized IQL and FamO2O+IQL as examples, assessing their training time using their JAX implementations over 1M offline steps and 1M online steps. The result is provided in the table below. From this analysis, we can discern that FamO2O augments the training time of IQL by roughly 8 minutes. **Given the substantial performance improvement brought by FamO2O, this increase in training time can be deemed acceptable.** | (unit: minute) | IQL | FamO2O+IQL | | -------------------------- | ------ | ---------- | | offline pre-training phase | 9.278 | 11.48 | | online fine-tuning phase | 12.532 | 18.25 | | total | 21.81 | 29.73 | > Q2: Adding another network requires additional choice of hyperparameters which might be hard to find. Thank you for your insightful observation. We do recognize the complexity that adding another network introduces due to the need to choose additional hyperparameters, such as the selection of the space of balance coefficients $\mathcal{B}$. Nevertheless, our empirical results, as presented in Figure 11 and Figure 12 of our paper, demonstrate that **FamO2O's performance is largely unaffected by the choice of hyperparameters, e.g., $\mathcal{B}$, within a sensible range**. --- We extend our heartfelt gratitude once more for your thorough review and thoughtful comments. We anticipate further dialogue and collaboration, and are open to any more thoughts you may have to help refine our work. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. For hyperparameters sensitivity demonstration you can consider using EOP https://arxiv.org/abs/2110.04156. 
I have no further questions on your work and will increase my confidence score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your thoughtful comments and advice. We have carefully examined the EOP paper and will take it into consideration for inclusion in the next version of our work. Your insights are greatly appreciated.
Summary: This paper approaches offline-to-online RL from the intuition that at a particular state, if the dataset already contains good actions, then the subsequent online tuning should be more conservative to retain the good actions in the dataset; but if the dataset's actions are poor, then more radical policy improvement is needed. To this end, this paper introduces a framework, Family Offline-to-Online RL (FamO2O), which aims at a state-adaptive improvement-constraint balance for each state. Specifically, from the collected dataset, the authors train a diverse policy family ranging from conservative to radical and use the environmental feedback to select an appropriate policy from this family at each state. Practically, this is achieved by a universal model, which determines the degree of policy conservatism, and a balance model, which learns the balance coefficients at each state. Experimental results show that FamO2O improves on various offline-to-online RL methods and achieves competitive performance. Strengths: 1. The paper is well-written and generally easy to follow. 2. The proposed method is theoretically well justified. 3. The empirical discussion and ablation study are thorough. Weaknesses: 1. The proposed method seems to require abundant diverse data, which may not be feasible in harder settings, e.g., the Adroit domain in the D4RL benchmark, where the data is limited and the data distribution is narrow and lacks diversity. 2. Maybe I misunderstand something. I think the purpose of the $\pi_b(s)$ model is to find the $\beta_s$ that corresponds to the optimal sequence of constraints $\\{\epsilon_s, s \in \mathcal{S}\\}$ in Eqn. (6), which relates to $\beta_s$ by the Lagrange multipliers $\mu(s)$ (Line 537, Appendix C.2). In this regard, the paper's story of balancing policy improvement and constraint can be slightly over-complicated and somewhat confusing. 3. 
The proposed method is in spirit similar to decision-transformer-style methods, and is therefore less of a surprise. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you explain more on how the updating rule in Eqn. (10) trains the balance model $\pi_b$ to control policy improvement and constraint? Why does maximizing the $Q^k$ values achieve this balance? 2. The significant variation in data quality across different states depicted by Figure 2 may come from the nature of the medium-expert datasets, which by construction contain medium and expert trajectories. Does this variation in data quality exist in other types of datasets, say expert, medium, or medium-replay? More generally, is this phenomenon of significant data-quality variation ubiquitous or specific? 3. Could you explain more on the benefit of "state-adaptive improvement-constraint balances"? Even if we do not make such a balance and simply online-finetune an offline-pretrained policy, the environmental feedback should be able to tell us which actions are good and should be retained and which actions are poor and should be improved. 4. Could you explain more why you consider the term $\log \pi (a|s)$ in Eqn. (1) as a policy constraint? What is the target of this constraint, especially when $(s,a)$ is an online interaction sample? 5. Could you explain more on how you get Eqn. (12) and (13) from Eqn. (5)? It would be better if you could expand Appendix C.1 to include more details and explanations. The current version is a bit hard to follow. 6. nit: in Eqn. (6), are you missing a "$\forall \epsilon > 0$" before $\exists \\{...\\}$? 7. [L161] Could you explain more on the cooperation between $\pi_u$ and $\pi_b$? Why couldn't we still randomly sample $\beta_s$ during online fine-tuning? 8. [L159] How many $\beta_s$ vectors are required to learn the universal model? How does this number scale with the number of states in the dataset? 
And how to select/design the balance coefficient space $\mathcal{B}$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to express our thanks to you for the detailed feedback and constructive criticism that guided our revisions. > Q1: The proposed method seems to require abundant diverse data. Thanks for pointing this out. The confusion might stem from an unclear explanation in our manuscript, and we'll clarify it in the paper's next version: It's worth noting that **our method doesn't necessarily need varied data qualities**. We claim that it can determine the best conservative/radical balance for each state in online scenarios based on the data quality in the collected dataset. **If the dataset is diverse in quality, the balances will be diverse; if the quality is consistent, the balances will be correspondingly consistent** (please refer to Figure 3 in [the global rebuttal's attached PDF](https://openreview.net/attachment?id=1S5GhI6UFd&name=pdf) for empirical evidence). Even with consistent data quality, our method has **the advantage of adaptively finding the proper balance** over existing offline RL algorithms with human-chosen balances. --- > Q2: The paper's story of balancing policy improvement and constraint can be slightly over-complicated and somewhat confusing. Thank you for your question, and apologies for any confusion. Here's the logic behind our method: - Our method aims to balance policy improvement and constraint, equating to different upper bounds $\epsilon_{\mathbf{s}}$ (see Equation 3). - Unlike AWAC or IQL, which use a single constraint term (Equation 5), different $\epsilon_{\mathbf{s}}$ create $|\mathcal{S}|$ constraints (Equation 4), requiring $|\mathcal{S}|$ Lagrange multipliers. - Given $\beta = d_{\pi_\beta}(\mathbf{s})/\mu$ (Line 537), we must have $|\mathcal{S}|$ different $\beta$ values, hence the necessity for state-adaptive balance (Proposition 3.5). - To generate these $\beta$ values, we introduce $\pi_b: \mathcal{S}\mapsto \mathbb{R}$, outputting $\beta$ based on $\mathbf{s}$. 
We will add this explanation to the next version of our paper. --- > Q3: The proposed method is in spirit similar to decision-transformer-style methods. Thank you for pointing out the similarity. **Please refer to [our response to Reviewer APCr's Q3](https://openreview.net/forum?id=vtoY8qJjTR&noteId=EQ1sNpef2S)**, where we compare our method with other works **including decision transformer [9]**. --- > Q4: How does the updating rule Eqn. (10) train the balance model $\pi_b$ to control policy improvement and constraint? Why does maximizing the $Q^k$ values achieve this balance? Thank you for your question. In Equation (9), the universal model $\pi_u$ is trained to adjust the conservative/radical balance of the policy based on the inputted balance coefficient $\beta_s$. Building on this, Equation (10) trains the balance model to select the $\beta_s$ that enables $\pi_u$ to maximize the Q value. This approach is grounded in the understanding that **the Q value serves as an estimate of future return, which is our ultimate goal of striking a balance between policy improvement and constraint**. This is in alignment with Equation (2), where it's important to note that $V(s)$ has no gradients concerning actions, and consequently, no gradients with respect to $\pi_b$. We will include this clarification in the next version of our paper. --- > Q5: Is the phenomenon of data-quality variation ubiquitous or specific? As depicted in **Figure 1 of [the global rebuttal's PDF](https://openreview.net/attachment?id=1S5GhI6UFd&name=pdf)**, the phenomenon of substantial variation in data quality is a **widespread occurrence** in offline datasets. It's worth reiterating, as previously noted in response to your Q1, that while our method can harness data across a range of qualities, **it does not fundamentally depend on data quality diversity**. --- > Q6: Could you explain more on the benefit of "state-adaptive improvement-constraint balances"? 
Indeed, as you suggest, online feedback can help the agent determine which actions are good and which are bad even without our state-adaptive improvement-constraint balances. However, **the existing fixed balance methods present two primary problems, which can be addressed by the implementation of state-adaptive balances**: 1. **Difficulty in Deciding Proper Balance:** In the absence of abundant information about an offline dataset and online environment, during offline pre-training, it's **difficult to pre-specify a proper improvement-constraint balance** that can optimally deal with every future state encountered during online interaction. 2. **Inflexibility of Fixed Balances:** As indicated by [3], existing offline-to-online algorithms with fixed balances inhibit drastic changes in an agent's behavior (e.g., from conservative to radical, or vice versa) due to "primacy bias." In contrast, our approach facilitates flexible adjustment of conservative or radical degrees for different states during online inference, as stated in our paper. **Without our proposed state-adaptive balances, the algorithms would struggle to adapt the conservative/radical degrees appropriately for varying states**. --- **Because of the character limit and our desire to respond to your questions thoughtfully, we've placed the remaining rebuttal in the [global rebuttal](https://openreview.net/forum?id=vtoY8qJjTR&noteId=1S5GhI6UFd). Please refer to it for the follow-up.** --- Again, we thank you for your invaluable insights and support in enhancing our work. We are eager to engage in further discussions or address any additional concerns to continue improving our manuscript. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Dear authors, Thank you so much for the detailed response, which clears out all my questions. I will increase my rating to 7. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer urL3, Thank you for your thoughtful review and for taking the time to reconsider our work. We appreciate your positive feedback and the increased rating. Your insights have been invaluable to us, and we are pleased to have addressed your concerns. Best regards.
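The cooperation between the universal model $\pi_u$ and the balance model $\pi_b$ discussed in Q4 above (Eqns. 9 and 10) can be sketched with a toy example. Everything below is an illustrative assumption rather than the authors' implementation: the action-blending rule, the discrete grid of candidate balance coefficients, and the 1-D quadratic critic are all invented for exposition.

```python
# Toy sketch (not the authors' code): the "universal model" produces an action
# whose conservatism is set by the inputted balance coefficient beta, and the
# "balance model" picks the beta that maximizes the critic's Q value, echoing
# the Eq. (9)/(10) training logic described in the rebuttal above.

def universal_policy(beta, dataset_action, greedy_action):
    """Low beta: stay close to the dataset action (conservative).
    High beta: pursue the critic-greedy action (radical)."""
    return (1.0 - beta) * dataset_action + beta * greedy_action

def balance_model(q_fn, state, betas, dataset_action, greedy_action):
    """Select the candidate beta maximizing Q(s, pi_u(s, beta))."""
    return max(
        betas,
        key=lambda b: q_fn(state, universal_policy(b, dataset_action, greedy_action)),
    )

# 1-D toy critic that prefers actions near 1.0.
q_fn = lambda s, a: -(a - 1.0) ** 2
betas = [i / 10 for i in range(11)]  # candidate balance coefficients in [0, 1]

# Poor dataset action (0.0): copying it fails, so a radical beta is selected.
radical = balance_model(q_fn, None, betas, dataset_action=0.0, greedy_action=1.0)
# Good dataset action (1.0): pushing further past the critic's optimum risks
# overestimation errors, so a conservative beta is selected.
conservative = balance_model(q_fn, None, betas, dataset_action=1.0, greedy_action=2.0)
```

The selected coefficients match the intuition in the rebuttal: a low (conservative) balance where the dataset action is already high-quality, and a high (radical) balance where it is poor.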
Summary: The paper proposes a new algorithm to perform offline-to-online reinforcement learning. The core idea is to consider a state-adaptive balance parameter, which aims to encourage imitation of dataset behavior only if the corresponding advantages / values are high, while prior works have mostly assumed fixed balances. The authors provide detailed experimental evaluation of their approach and show superiority over relevant baselines. Strengths: I think offline-to-online RL is a very relevant and promising direction for research since, if reliable, it would enable much more practical applications of RL in real-world tasks. The core idea of the paper, training a collection of policies that adapts on a per-state level, appears logical and powerful (however please see [2,3,4], which I think put forward similar ideas) and will in the future probably significantly influence the way offline-to-online RL is performed and thought about. The paper is overall very well written and understandable & offers a strong statistical analysis of the proposed algorithm's performance compared to relevant baselines. Weaknesses: The term data quality is frequently used but not really introduced (could mean accuracy / truth of the information in the data, but I think it means something like return). Not all D4RL locomotion datasets were used - it's unclear whether that's an issue since the selection is not justified. The considered baselines appear relevant (TD3+BC & CQL less so, since it's already shown in the AWAC paper that using the same conservative formulations in the online setting does not work well), however I think the closest existing methods are missing: E.g. Confidence-conditioned value functions [1] automatically adjusts its policy confidence level (as far as I understand on a per-state level, since every state goes into the considered history). Also methods like RvS [4], which condition on return, could easily be extended to this setting (i.e. 
always try to maximise conditioned return during online data collection). Generally I have the feeling that some very relevant related work regarding offline adaptive policies was not considered, i.e. also: [2] User-Interactive Offline RL (which considers policies conditioned on a balance between conservative and very liberal) [3] offline policies should be trained to be adaptive (which adapts policies based on the online collected history) I'm thus not sure the idea of a state-adaptive balance between conservative and radical for offline-to-online RL is entirely novel. I see some issues regarding clarity that could easily be fixed (see questions) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you please define how you use the term data quality (I'm assuming it has something to do with the data's return, but I'm not sure)? Please elaborate why the random datasets in the D4RL datasets were not considered. There exist offline-to-online methods that already consider something like state-adaptive policy changes, like confidence-conditioned value functions [1], as well as other adaptive offline-to-online policy concepts [2,3] that could easily be extended to your case - why were they not considered? (I realise that some are rather recent, but I think they deserve at least a brief discussion) I do not understand section 6.1 / figure 8 at all - what is meant by guidance? what is meant by high / low quality data? Starting positions are denoted as triangles, suggesting a moving direction, but I don't think that's intended. The text says high balance coefficients are found where data quality is also high (quality=reward / return?) & that higher quality data is found at the lower crossing point in the 5th row, however from the color encoding it seems that the balance coefficient there is actually the lowest... 
I think the ratio between offline and online training steps as well as collected interaction steps is a crucial parameter for reproducibility / future comparisons, however that information is only conveyed in the figure 10 axis & the appendix - I think it should be explicitly stated somewhere in the experimental section text. When I look at figure 10, it seems that after all online training steps, the attained performance is almost identical to that at the end of offline training - isn't that a very disappointing result, and couldn't you just throw away the whole online part altogether then? Since FamO2O outperforms prior offline-to-online baselines, does that mean that these baselines only get worse with online training? [1] Hong, J., Kumar, A., & Levine, S. (2022). Confidence-Conditioned Value Functions for Offline Reinforcement Learning. ICLR 2023 [2] Swazinna, P., Udluft, S., & Runkler, T. (2022). User-Interactive Offline Reinforcement Learning. ICLR 2023 [3] Ghosh, D., Ajay, A., Agrawal, P., & Levine, S. (2022, June). Offline RL policies should be trained to be adaptive. ICML 2022 [4] Emmons, S., Eysenbach, B., Kostrikov, I., & Levine, S. (2021). RvS: What is Essential for Offline RL via Supervised Learning?. arXiv preprint arXiv:2112.10751. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: no, limitations are not discussed - perhaps, given the lack of improvement after online fine-tuning in figure 10, as well as the often only small improvements over standard IQL, a brief discussion would be good. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your thoughtful comments and critiques are sincerely appreciated and have been instrumental in refining our study. **Due to the character limit, the references in this rebuttal are at the global rebuttal.** Thanks for your time and effort. > Q1: Definition of the term data quality? Thank you for pointing this out. Your assumption that **the "data quality" refers to the return** is correct. Although we have indeed implicitly mentioned this in Figure 2 and Lines 44-48 of our manuscript, we will make sure to explicitly define the term in the next version of our paper. --- > Q2: Elaborate on why the random datasets in the D4RL datasets were not considered. Thank you for your question. Our approach mainly **follows the practice in IQL paper**, and many other works (e.g., [1,2]) also exclude tests on random datasets. **The primary reason is the RL agent's primacy bias** [3], leading to overfitting on poor-quality random datasets, resulting in performance that often does not exceed that of directly training online for the same number of gradient steps. To confirm this, we tested IQL with offline-to-online learning on `hopper-random-v2` and compared it with SAC's online learning performance. The results, as shown in the table below, demonstrate that IQL's performance on random datasets is significantly lower than SAC's. Hence, using random datasets for offline-to-online RL is considered to be of limited practical value in our context. | | Normalized Score Mean | Normalized Score Std | | ---------------------------------------------------------- | --------------------- | -------------------- | | IQL (offline-to-online, pre-trained on `hopper-random-v2`) | 40.02667 | 38.79444 | | SAC (online learning) | 86.86755 | 21.07426 | --- > Q3: Other related works [5,6,7,8] Thank you for pointing out these related works. 
Our FamO2O method emphasizes two main features: 1) Utilizing various data, similar to Balanced Replay [4], and 2) employing a conditioned policy, related to the papers mentioned by you [5,6,7,8] and by Reviewer urL3 [9]. Having discussed Balanced Replay extensively, we will now focus on the conditioned policy's related works: 1. **[5]:** They focus on diverse conservative **Q-values**, while we prioritize varying conservative **policies**. Their method **isn't suited for continuous action settings** like D4RL, but ours is versatile for both continuous and discrete actions. 2. **[6]:** Their setting relies on **user interaction** for online adjustments, whereas ours is **automatic**. 3. **[7]:** This paper emphasizes offline RL, estimating probabilities of various MDPs in uncovered areas with a Bayesian posterior, and adjusting policies accordingly. However, in the offline-to-online RL context, where agents explore during online fine-tuning, **lack of coverage may not be a primary concern**. Their focus is on **adapting to different MDPs**, while ours targets **utilizing data of varying quality**. 4. **[8, 9]:** Despite sharing conditioned policy use (as discussed in our related work section), they aim to liberalize policy learning from merely copying offline behavior, **differing significantly from our motivation**, and their strategies are **not adaptively adjustable**. We value your insight and will include this discussion in the related work section. --- > Q4: Queries on guidance, high/low-quality data, starting positions & triangles, and texts in L258-262 Thank you for your comments and questions regarding Section 6.1 and Figure 8. We apologize for any confusion caused and provide explanations here: 1. **Guidance**: Guidance refers to directing the agent to the shortest route across a crossing point during offline data collection. Without it, the agent moves randomly. 2. 
**High/Low-Quality Data**: High-quality data results from the agent moving w/ guidance, while low-quality data is collected when the agent moves w/o guidance. 3. **Starting Positions & Triangles**: The triangles don't represent moving direction. We've revised the figure to avoid confusion (see Figure 2 in the global rebuttal's PDF). 4. **Texts in L258-262**: There were typos in lines L258-262. The corrected statement is: "Figure 8(b) shows the agent typically outputs **lower** balance coefficients for **high-quality** samples and **higher** ones for **low-quality** data." This aligns with the FamO2O motivation, as lower/higher balances reflect a more conservative/radical policy. We will add the above clarifications and modifications to the next version. --- > Q5: Explicitly state crucial parameters in the experimental section. Thanks for your constructive suggestions. We will state the crucial parameters, e.g., offline and online training steps, and collected interaction steps, in the experimental section in the next version. --- > Q6: The almost identical performances after online fine-tuning and offline pre-training in Figure 10. Thank you for your question. Figure 10 specifically highlights **an extreme case** on a single dataset, where IQL shows **the most significant performance drop**. It demonstrates that even in this situation, IQL+FamO2O can alleviate the drop and attain good performance. However, this is an isolated instance, and generally, **as shown in the table below, both IQL and IQL+FamO2O achieve better performance after online fine-tuning compared to after offline pre-training**. | | Offline Performance Sum | Online Performance Sum | Fine-tuning Improvement | | ---------- | ---------- | ------ | ----------------------- | | IQL | 581.7| 718.3| +136.6| | FamO2O+IQL | 584.4| 772.0| +187.6| --- > Q7: Limitations are not discussed. Thanks for your advice. We will discuss limitations in the revised version. 
--- Thank you once again for your valuable feedback and advice. We look forward to further discussion to refine our work. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: Thank you very much for the detailed responses to my questions, they have been very helpful in better understanding your work, especially Q3,4 & 6. I am however still not quite sure I understand Figure 8 (Q4): When you say guidance means "directing the agent", how exactly do you direct it? Is there an explicit reward signal given only at this point or does a separate policy take over which "knows" the way or ...? I believe illustrative examples like this one are important & I understand you have limited space, but I think a little more information is needed to make it really helpful. If the color encoding is correct & what you wrote in your response > the agent typically outputs lower balance coefficients for high-quality samples is correct, I might still misunderstand your method. In Eq(1) it seems to me that high balance coefficients would lead to the agent more likely copying the behaviour that was present in the dataset - since I would like to repeat behaviour that has yielded high return, I would expect high balance coefficients in states where you have high quality data. In the example it is however the other way around... Could you please elaborate? One more clarification regarding your answer on Q6: Does that mean the plot shows performance only on a single offline dataset? If so, which one? --- Reply to Comment 1.1.1: Title: Explanations of Figure 8 (Q4), Eq. (1), and Figure 10 (Q6) Comment: Thank you for your thought-provoking questions. We've addressed your inquiries in detail below. > The meaning of "directing the agent". "Directing the agent" refers to compelling the agent to adhere to the route and direction that yield the shortest path to the goal, rather than letting the agent decide the route and direction on its own. 
This aligns with your perception that "a separate policy take over which 'knows' the way". Thank you for bringing this to our attention. We will incorporate the above explanation into the next version of our paper to make this point clearer. > The effect of the balance coefficient value. Thank you for your thoughtful question. You're suggesting that high balance coefficients should be used with high-quality data, which makes sense at first glance. But our method works differently, and here's how: - **For High-Quality Data:** Utilizing high balance coefficients might lead the policy to aggressively pursue actions with the highest possible advantage, $Q(\mathbf{s}, \mathbf{a})-V(\mathbf{s})$. **But since the advantages of the high-quality data are already high, trying to push for even higher advantages can easily lead to mistakes due to overestimation in Q values**. So, we use lower balance coefficients for high-quality data, making sure the policy stays safe by following the known good actions. - **For Low-Quality Data:** On the other hand, with low-quality actions, it makes sense to use higher balance coefficients. **Copying what the low-quality data does will surely end up in failure, so it's worth the risk to try for something better**. This leads to a more daring or "radical" policy that looks for higher-quality actions. By using balance coefficients this way, depending on whether the data is high or low quality, our method reduces the risks and finds a good middle ground. It doesn't chase after the highest advantages in a way that can cause mistakes, but it also doesn't just copy what's in the bad data. It's a careful balance that helps the policy make the best decisions. > One more clarification regarding your answer on Q6: Does that mean the plot shows performance only on a single offline dataset? If so, which one? 
Yes, Figure 10 displays the performance solely on one offline dataset, namely `antmaze-umaze-diverse`, as referenced on line L297 of our manuscript. We selected this specific dataset because, on it, IQL exhibits the most significant decline in performance when transitioning from offline pre-training to online fine-tuning. Although we've alluded to this rationale on lines L296-297 of our manuscript, we will articulate it more explicitly in the upcoming version of our paper.
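To make the conservative-vs-radical effect of the balance coefficient concrete, here is a minimal pure-Python sketch (an illustration, not the paper's implementation), assuming an AWAC-style objective in which each sample's contribution is weighted by $\exp(\beta \cdot A(\mathbf{s}, \mathbf{a}))$: a small $\beta$ keeps the weights near-uniform (close to behavior cloning, i.e., conservative), while a large $\beta$ concentrates the weight on the highest-advantage actions (radical).

```python
import math

def normalized_weights(advantages, beta):
    """Per-sample weights exp(beta * A), normalized to sum to 1.

    Hypothetical illustration of an advantage-weighted objective:
    beta controls how sharply the policy favors high-advantage actions.
    """
    w = [math.exp(beta * a) for a in advantages]
    total = sum(w)
    return [x / total for x in w]

advantages = [0.1, 0.5, 1.0, 2.0]  # toy advantage estimates

conservative = normalized_weights(advantages, beta=0.5)  # near-uniform
radical = normalized_weights(advantages, beta=5.0)       # peaked on A=2.0

print(conservative)
print(radical)
```

With the small $\beta$, the weights stay close to uniform and the objective degenerates toward behavior cloning; with the large $\beta$, almost all of the mass falls on the highest-advantage sample.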
Summary: The paper introduces a new method to mitigate the distribution shift problem in the offline to online RL problem. The paper states the intuition that the policy should behave differently on states with different values, that is, the policy should be more conservative on high return states and exploratory on the low return states. With this intuition, the paper proposed to train a family of policies in the offline to online setting, specifically, train another "policy" to parameterize the rollout policy via the state-adaptive balance coefficient. The experiments show that FamO2O outperforms previous O2O baselines, a toy experiment shows that FamO2O indeed learns state-wise adaptivity, and various ablations show the importance of each design choice. Strengths: 1. The experimental results are solid, as the method is evaluated on extensive D4RL datasets with different data quality and on both locomotion and maze tasks. 2. The discussion sections validate the algorithm design choices, and section 6.1 verifies that the algorithm indeed learns a state-wise adaptive policy, which supports the intuition and motivation of the algorithm. Weaknesses: 1. The exploitation vs. exploration intuition is not brand new in the offline to online setting; there is also some work with similar intuition [1]. I believe proper comparison is required given the similarity of the intuition, although I believe training a family of policies (or conditionally parametrizing the policy) seems like a slight generalization. 2. It is confusing that in the universal model training (eq. (9)), the exponential of the advantages are weighted by the balance parameter $\beta$, but when training the balance model, the Q-function (as in the loss) is not weighted by the balance parameter. There seems to be some inconsistency. 
I can tell that the unweighted objective (the current form of eq (10)) would be more computationally friendly, but theoretically, it seems more natural to optimize over a weighted version where eq. (10) is also weighted by $\beta$, which is sampled from the balance model. 3. The action distance in eq. (11) may not be the best metric to measure the discrepancy between a policy and a trajectory. For example, if the policy that induces the trajectory only takes $a_1$ in $s$, and the evaluated policy takes $a_1$ with $p=0.51$ and $a_2$ with $p=0.49$, which may induce a very negative reward, or cause great trajectory derailment (which is not recorded in the offline trajectory), the proposed metric will still be 0 but in reality the evaluated policy is not that close to the trajectory. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and constructive suggestions, which have helped us improve our manuscript. > Q1: The exploitation vs. exploration intuition is not brand new in the offline to online setting, there is also some work with similar intuition [1]. I believe proper comparison is required given the similarity of the intuition, although I believe training a family of policies (or conditionally parametrized the policy) seems like a slight generalization. Thank you for bringing up the important issue of exploitation vs. exploration intuition and the comparison to previous work. **Regrettably, the reference [1] was not provided in your comments, so we are unable to make a direct comparison to the specific work you are referring to.** If you could kindly provide the missing reference, we will be more than happy to provide a detailed response and comparison. In addition, we would like to highlight that Reviewer APCr has pointed out four references that bear similarity to the ideas in our paper. We have conducted a thorough comparison with those references and addressed the similarities and differences in [our response to Reviewer APCr's Q3](https://openreview.net/forum?id=vtoY8qJjTR&noteId=EQ1sNpef2S). --- > Q2: It is confusing that in the universal model training (eq. (9)), the exponential of the advantages are weighted by the balance parameter $\beta$, but when training the balance model, the Q-function (as in the loss) is not weighted by the balance parameter. There seems to be some inconsistency. I can tell that the unweighted objective (the current form of eq (10)) would be more computationally friendly, but theoretically, it seems more natural to optimize over a weighted version where eq. 10 is also weighted by $\beta$, which is sampled from the balance model. Thank you for your observations on Equations (9) and (10). We acknowledge your suggestion to weight Q with the balance coefficient in Equation (10). 
We will explore the effects of this weighting in Equation (10) in two cases: - If Q were weighted by $\beta_{\mathbf{s}} \sim \pi_b$ **without stopping the gradient** (see the equation below), training $\pi_b$ would aim to find the $\beta_{\mathbf{s}}$ that maximizes the Q-value and also increase $\beta_{\mathbf{s}}$, the output of $\pi_b$. The latter is undesired, as FamO2O's objective is to find the proper $\beta_{\mathbf{s}}$ for each state $\mathbf{s}$, not to pursue larger $\beta_{\mathbf{s}}$ leading to an aggressive policy. $$\pi_b^{k+1}=\underset{\pi_b}{\arg \max}\;\mathbb{E}_{(\mathbf{s}, \mathbf{a})\sim\mathcal{D}}\left[{\color{red}\beta_{\mathbf{s}}}\, Q^k(\mathbf{s}, \pi_u^{k+1}(\mathbf{s}, \beta_{\mathbf{s}}))\right], \quad\text{where}\quad \beta_{\mathbf{s}}\sim\pi_b(\mathbf{s}).$$ - If **the gradient were stopped** (see the equation below), the more aggressive balance could result in larger $\beta_{\mathbf{s}}$, skewing the Q-value. This would focus $\pi_b$ on radical policies, leading to possible extrapolation error. $$\pi_b^{k+1}=\underset{\pi_b}{\arg \max}\;\mathbb{E}_{(\mathbf{s}, \mathbf{a})\sim\mathcal{D}}\left[{\color{red}\operatorname{stopgrad}(\beta_{\mathbf{s}})}\, Q^k(\mathbf{s}, \pi_u^{k+1}(\mathbf{s}, \beta_{\mathbf{s}}))\right], \quad\text{where}\quad \beta_{\mathbf{s}}\sim\pi_b(\mathbf{s}).$$ In summary, the unweighted formulation in Equation (10) is intentionally chosen to let $\pi_b$ find the proper balance between improvement and constraint for maximum returns. We hope this clarifies our design decision. --- > Q3: The action distance in eq. (11) may not be the best metric to measure the discrepancy between a policy and a trajectory. 
For example, if the policy that induces the trajectory only takes $a_1$ in $s$, and the evaluated policy takes $a_1$ with $p=0.51$ and $a_2$ with $p=0.49$, which may induce a very negative reward, or cause great trajectory derailment (which is not recorded in the offline trajectory), the proposed metric will still be 0 but in reality the evaluated policy is not that close to the trajectory. Thank you for your insightful observation regarding the action distance in Equation (11). We understand the scenario you described. However, our specific formulation for action distance is defined as $d_{\text{action}}^{\pi,\tau} = \mathbb{E}_{(s, a)\sim\tau}[|| \underset{a'}{\arg\max}\pi(a'|s)-a||_2^2]$. Since the environments in D4RL are continuous action spaces, **$\pi$ outputs a normal distribution**, and thus $\arg\max_{a'}\pi(a'|s)=\mathbb{E}_{a'\sim\pi(\cdot|s)}[a']$, i.e., the mode of a normal distribution coincides with its mean. With this premise, **the action distance can provide an accurate measure of the distance between the center of the distribution and the actions within the dataset**, and it should adequately reflect the discrepancy between a policy and a trajectory in the context of our work. --- Once again, we express our gratitude for your expertise and careful consideration. We remain open to further dialogue and are eager to address any more questions or concerns you may have. --- Rebuttal Comment 1.1: Title: Response Comment: I appreciate the authors' detailed response and my concerns (2 & 3) are addressed, and I increased my score accordingly. I also apologize for not specifying the reference, and if I recall correctly, [1] should be [1] Zhang, Haichao, We Xu, and Haonan Yu. "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning." arXiv preprint arXiv:2302.00935 (2023). --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your considerate review and for identifying an important reference. 
Concerning [1], which details the method of alternating between offline policy $\pi_\beta$ and online policy $\pi_\theta$ (starting from scratch), it indeed stands as a significant and relevant work to our research. We appreciate your recommendation, and we commit to including a comparison with this work in the revised version of our paper. Your insight and guidance are gratefully acknowledged, and we thank you once again for your constructive feedback.
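For readers who want to reproduce the action-distance metric of Eq. (11) discussed in Q3, here is a minimal pure-Python sketch (hypothetical helper names; the policy is assumed Gaussian, so its argmax/mode is its mean action):

```python
def action_distance(policy_means, dataset_actions):
    """d_action = E_{(s,a)~tau}[ || argmax_a' pi(a'|s) - a ||_2^2 ].

    For a Gaussian policy the argmax (mode) equals the mean, so each
    policy_means[i] is the policy's modal action at the i-th state.
    """
    assert len(policy_means) == len(dataset_actions)
    total = 0.0
    for mu, a in zip(policy_means, dataset_actions):
        total += sum((m - x) ** 2 for m, x in zip(mu, a))
    return total / len(dataset_actions)

# Toy 2-D action space: the policy's modal actions vs. the logged actions.
means = [[0.0, 0.0], [1.0, 1.0]]
acts = [[0.0, 1.0], [1.0, 1.0]]
print(action_distance(means, acts))  # squared distances 1.0 and 0.0 -> 0.5
```

Note that this metric measures only the distance between the distribution's center and the logged actions, which is exactly the premise defended in the rebuttal above.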
Rebuttal 1: Rebuttal: ### References for All Reviewers: **Dear reviewers, due to the rebuttals' character limit, we've placed the references for all rebuttals below. Thank you for your time and consideration.** [1] Luo, Yicheng, et al. "Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions." *arXiv preprint arXiv:2303.17396* (2023). [2] Piche, Alexandre, et al. "Implicit Offline Reinforcement Learning via Supervised Learning." *arXiv preprint arXiv:2210.12272* (2022). [3] Nikishin, Evgenii, et al. "The primacy bias in deep reinforcement learning." *International conference on machine learning*. PMLR, 2022. [4] Lee, Seunghyun. "Offline-to-online reinforcement learning via balanced experience replay and pessimistic Q-ensemble." (2021). [5] Hong, Joey, Aviral Kumar, and Sergey Levine. "Confidence-conditioned value functions for offline reinforcement learning." *arXiv preprint arXiv:2212.04607* (2022). [6] Swazinna, Phillip, Steffen Udluft, and Thomas Runkler. "User-Interactive Offline Reinforcement Learning." *arXiv preprint arXiv:2205.10629* (2022). [7] Ghosh, Dibya, et al. "Offline rl policies should be trained to be adaptive." *International Conference on Machine Learning*. PMLR, 2022. [8] Emmons, Scott, et al. "Rvs: What is essential for offline rl via supervised learning?." *arXiv preprint arXiv:2112.10751* (2021). [9] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." *Advances in neural information processing systems* 34 (2021): 15084-15097. [10] Nair, Ashvin, et al. "Awac: Accelerating online reinforcement learning with offline datasets." arXiv preprint arXiv:2006.09359 (2020). --- ### Follow-up Rebuttal for Reviewer urL3: > Q7: Explain why you consider the term $\log\pi(a|s)$ in Eqn. (1) as a policy constraint. What is the target of this constraint, especially when $(s, a)$ is an online interaction sample? 
**Explanation on policy constraint.** If we remove the policy improvement term in Equation (1), it becomes $L_\pi = \mathbb{E}_{(\mathbf{s}, \mathbf{a})\sim\mathcal{D}}[\exp(\beta)\cdot \log\pi(\mathbf{a}|\mathbf{s})]$. As $\exp(\beta)$ is a constant that can be absorbed in the learning rate, the equation can be further simplified into $L_\pi = \mathbb{E}_{(\mathbf{s}, \mathbf{a})\sim\mathcal{D}}[\log\pi(\mathbf{a}|\mathbf{s})]$, which is behavior cloning that maximizes the log-likelihood of the action $\mathbf{a}$ under the state $\mathbf{s}$. Therefore, we term $\log\pi(\mathbf{a}|\mathbf{s})$ a policy constraint because it forces the behavior of policy $\pi$ to stay close to that of the collected dataset $\mathcal{D}$. **Target of policy constraint.** The constraint stops excessive updates and exploration of unknown areas, particularly during the switch from offline pre-training to online fine-tuning, where it stabilizes training and prevents a performance decline (analyzed by AWAC [10]). **Effect of policy constraint for online samples.** As new online interaction samples are added to dataset $\mathcal{D}$, they may be part of the sampled data, ensuring that the behavior policy, to which policy $\pi$ is closely constrained, gradually aligns with the online state-action distribution (also analyzed by AWAC [10]). > Q8: Explain more on how to get Eqn. (12) and (13) from Eqn. (5)? Expand Appendix C.1 to include more details. Thanks for your advice, and we will include more details in Appendix C.1 in the next version. Due to the character limits, please refer to the attached PDF of this global rebuttal for an explanation of how to get Eq. (12), (13) from Eq. (5). > Q9: Miss a "$∀\epsilon\ge0$" before $\exists\{\cdots\}$ in Eqn. (6)? Thank you for pointing this out. We will add "$\forall \epsilon \ge 0$" to Eqn. (6) in the revised version. > Q10: [L161] Explain on the cooperation between $\pi_u$ and $\pi_b$. 
Why not still randomly sample $\beta_{s}$ during online fine-tuning? Thank you for the feedback. The intuitions behind the interplay between the universal model $\pi_u$ and the balance model $\pi_b$ are as follows: 1. **Offline Pre-training Phase**: At this stage, due to the absence of online feedback, it's uncertain which balance coefficient $\beta_{\mathbf{s}}$ results in an optimal policy for any given state $\mathbf{s}$. Consequently, **$\pi_u$ is exposed to random $\beta_{\mathbf{s}}$ values, enabling it to learn from a diverse range of policies**. On the other hand, the balance model $\pi_b$ is trained by maximizing the Q-value, but the Q-value might not always be accurate, particularly for unseen areas. As such, during offline pre-training, **it's not feasible for $\pi_b$ to pinpoint the ideal balance coefficients for $\pi_u$.** 2. **Online Fine-tuning Phase**: With the advantage of online feedback, the Q-value refines, enhancing the reliability of $\pi_b$. Allowing $\pi_b$ to ascertain $\beta_{\mathbf{s}}$ for $\pi_u$ not only **benefits from $\pi_b$'s improved performance** but also **compels $\pi_u$ to emphasize the candidate policies frequently opted by $\pi_b$**. > Q11: [L159] How many $\beta_{\mathbf{s}}$ vectors are required to learn $\pi_u$? How does this number scale with the number of states in the dataset? And how to select/design the balance coefficient space $\mathcal{B}$? Thank you for your question concerning the number of $\beta_{\mathbf{s}}$ vectors and the selection of the balance coefficient space $\mathcal{B}$. In our paper, $\beta_{\mathbf{s}}$ is actually a scalar, sampled from $\mathcal{B} = [\beta_{\text{low}},\beta_{\text{high}}]$, thus **the number of $\beta_{\mathbf{s}}$ is infinite**. As for the selection of $\mathcal{B}$, we have discussed this in lines L171-172, Appendix E.2 and F.2. 
Briefly, our method's performance is insensitive to the choice of $\beta_{\text{low}}$ and $\beta_{\text{high}}$, provided we refrain from using extremely radical values in $\mathcal{B}$. More details can be found in the aforementioned parts of the paper. Pdf: /pdf/402a6e4a300ed0452889f32cbfcc8b343c8ddab8.pdf
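As a side note, the behavior-cloning reduction used in Q7 — a constant $\exp(\beta)$ factor only rescales the gradient and can be absorbed into the learning rate — is easy to verify numerically. A minimal sketch (function names are hypothetical), assuming a unit-variance Gaussian policy $\pi(\cdot|\mathbf{s})=\mathcal{N}(\mu, 1)$ so that $\log\pi(a|s) = -(a-\mu)^2/2 + \text{const}$:

```python
import math

def bc_grad(mu, actions):
    """Gradient of E[log pi(a|s)] w.r.t. mu, for pi = N(mu, 1)."""
    return sum(a - mu for a in actions) / len(actions)

def weighted_grad(mu, actions, beta):
    """Gradient of E[exp(beta) * log pi(a|s)]: the same direction,
    scaled by the constant exp(beta)."""
    return math.exp(beta) * bc_grad(mu, actions)

actions = [0.2, -0.5, 1.3]
g_bc = bc_grad(0.0, actions)
g_w = weighted_grad(0.0, actions, beta=2.0)
print(g_w / g_bc)  # exp(2): a constant, absorbable into the learning rate
```

The ratio between the two gradients is the constant $\exp(\beta)$, independent of the data, which is why the weighted objective optimizes the same behavior-cloning direction.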
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
The Geometry of Neural Nets' Parameter Spaces Under Reparametrization
Accept (spotlight)
Summary: The paper discusses reparametrizations of parameter spaces and the implied transformation rules for quantities like gradients, Hessians or probability densities. Parameter spaces are interpreted as Riemannian manifolds $M=\mathbb{R}^d$ and the quantities of interest are coordinate independent geometric objects like tangent vectors, covectors, tensors, or volume forms. The transformation laws of such objects are well known in differential geometry, however, as the authors argue, they are in deep learning applications often disregarded. In particular, the metric tensor on $M=\mathbb{R}^d$ is often taken to be $G=\mathbb{I}$ and therefore dropped from the equations. Its transformation when changing coordinates is then naively forgotten, that is, one uses again metric coefficients $\hat{G}=\mathbb{I}$ in the new coordinates, which correspond geometrically to a _different Riemannian geometry_, such that the algorithms become coordinate dependent. The contribution of the paper is to point out these disregarded transformation rules and to discuss how the quantities should actually transform. Section 2 introduces the mathematical setting and explains transformation rules of geometric quantities, giving in particular examples of how their transformation laws are consistent by canceling out in different tensor contractions. The third section discusses more specific quantities of interest in deep learning. Firstly, it considers the Hessian matrix determinant as measure of flatness of parameter landscapes. The authors argue that this measure is not well-suited since it depends on the choice of coordinates. They propose to use the determinant of $G^{-1}H$ instead, which is invariant under reparametrizations. Secondly, it considers loss gradients $\mathrm{grad}\mathcal{L} := G^{-1}\nabla\mathcal{L}$. On $M=\mathbb{R}^d$, the trivial metric $G=\mathbb{I}$ is usually dropped, which leads again to inconsistent transformations of the gradient. 
Lastly, Section 3.3 discusses probability density functions (pdfs). When being expressed relative to the Lebesgue measure on $M=\mathbb{R}^d$, the pdfs transform with the well known Jacobian determinant factor. However, as the density may be stretched out or condensed in this procedure, the densities' mode may not be preserved. It is hence more suitable to express the density relative to the Riemannian volume form: as this form transforms itself already with the Jacobian determinant factor, the density relative to it remains invariant, which preserves in particular its modes. After discussing related work in section 4, the fifth section considers some applications, arguing in particular that NTK and standard parametrizations of neural networks are not just reparametrizations, but geometrically truly different models (section 5.1) and that the Laplace marginal likelihood is invariant under reparametrization (section 5.2). It investigates furthermore the effect of using the Hessian or $G^{-1}H$ in (preconditioned) optimizers like ADAM. Strengths: The authors observe that many of the mathematical formulations used in deep learning are from a geometric perspective not coordinate independent, which is a fundamental feature that any consistent mathematical theory should satisfy. Its main contribution lies in pointing out this issue and discussing how the equations can be fixed. While this contribution is not "original" in the sense that it would be a novel idea, making researchers aware of it is of utmost significance since a coordinate independent formulation of algorithms is a fundamental requirement. It is hard to judge how clear the paper is for someone without background in differential geometry, but it is kept simple and is certainly easy to read for someone knowing about differential geometry. Weaknesses: A main weakness of the paper is that the mathematical formulation could be more precise at some points. 
I would usually not be too strict in deep learning, however, as the paper sets out to fix the inconsistent use of mathematics, it should be more precise. These weaknesses should be easy to fix - more details follow in the next paragraphs. Firstly, the considered coordinate charts and hence reparametrizations are _global_ homeomorphisms. This is in principle possible, but excludes practically relevant choices like polar or spherical coordinates, which are not global homeomorphisms. Polar coordinates are, in fact, used as an example right after saying that charts should be global homeomorphisms. The only clean way around this issue (and to include polar coordinates) would be to admit the usual atlases of _local_ charts and study reparametrizations as usual on the intersections of charts. Specifically, there should be charts $\theta: U^\theta \to \theta(U^\theta) \subseteq \mathbb{R}^d$ and $\psi: U^\psi \to \psi(U^\psi) \subseteq \mathbb{R}^d$ on domains $U^\theta\subseteq M=\mathbb{R}^d$ and $U^\psi\subseteq M=\mathbb{R}^d$ with transition maps $\varphi: \theta(U^\theta\cap U^\psi) \to \psi(U^\theta\cap U^\psi)$. Note that the main results of the paper will still hold in this more general setting. Secondly, the paper keeps talking about a dubious concept of "_equivariance under reparametrization_", which is non-standard and, while looking somewhat similar to the usual concept of group equivariance, is different from it and more confusing than it is enlightening. It is introduced at the end of the second section, assuming parameter space reparametrizations $\varphi:\Theta\to\Psi$ and functions which simultaneously seem to satisfy $F: \Theta\to\Theta$ and $F: \Psi\to\Psi$. This would in principle require $\Theta=\Psi$, while actually only $\Theta\cong\mathbb{R}^d\cong\Psi$ is demanded initially. 
Furthermore, this does not work in the clean formulation with local charts suggested above, since then $\varphi: \theta(U^\theta\cap U^\psi) \to \psi(U^\theta\cap U^\psi)$ and the upper and lower arrows would be $F^\theta: \theta(U^\theta\cap U^\psi) \to \theta(U^\theta\cap U^\psi)$ and $F^\psi: \psi(U^\theta\cap U^\psi) \to \psi(U^\theta\cap U^\psi)$, respectively. The corresponding diagram would be the usual coordinate independence transformation rule $F^\psi = \varphi\circ F^\theta\circ \varphi^{-1}$, which is _not_ an equivariance condition. It can in general also not be made to one by setting $F^\psi = F^\theta$, as this would require the equality of $\theta(U^\theta\cap U^\psi) = \psi(U^\theta\cap U^\psi)$ in the first place. The concept of equivariance is subsequently used in a rather unspecific way in the paper. Section 3.2 talks about and is titled "equivariance of gradient descent", however, this is not made precise, i.e. no equation is mentioned which follows the commutative diagram in the author's definition of equivariance. Furthermore, equivariance is a property of _functions_, and it is not entirely clear to me how one should interpret the gradient descent algorithm as such. Section 3.3 is titled "equivariance of probability densities", which seems to refer to equation 4, i.e. $q_\Psi^G(\psi) = q_\Theta^G(\varphi^{-1}(\psi))$. However, this is just a coordinate independence equation, since the functions $q_\Psi^G$ and $q_\Theta^G$ on the left and right hand side differ from each other (they are not the same $F$ as in the diagram). The issue of this dubious notion of "equivariance" is easily fixed by removing it from the paper and referring to it as usual as coordinate independence or covariance. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The issue of a coordinate independent formulation of deep learning algorithms was previously studied in the publication "Equivariant and coordinate independent convolutional networks" by Weiler et al. (2021), which should be mentioned. I would furthermore be very interested in how their coordinate independence of feature vectors and neural network operations relates to the coordinate independence of parameter spaces in the current submission? Further minor suggestions and corrections beyond what I wrote above follow. These should be very easy to fix. Line 50 states that _"Intrinsic ... means that objects ... must be independent of ... coordinate system"_. This concept is called coordinate independence or covariance. "Intrinsic" refers instead to "not extrinsic", where extrinsic properties are geometric properties depending on an embedding of the manifold in some ambient space (e.g. sectional curvatures). Charts are defined as homeomorphisms, but the paper considers smooth manifolds. The smooth structure is actually only respected (and defined via) smooth charts, i.e. diffeomorphisms. It would probably be helpful to note that the "standard choice" of global chart in line 97 is the canonical identity map $\mathrm{id}_{\mathbb{R}^d}$. That the choice of coordinate system is not unique is somewhat tricky, as there exists the _canonical_ global chart $\mathrm{id}_{\mathbb{R}^d}$ mentioned by the authors. If the manifold was just taken as Euclidean space, i.e. with metric but without the canonical coordinates of $\mathbb{R}^d$, global charts would still not be arbitrary, but one could restrict to isometries, which are defined up to transition maps in the Euclidean group $\mathrm{E}(d)$. This raises the question why we are considering general diffeomorphisms in the first place? Is the metric just an arbitrary choice (which should of course still be respected, requiring coordinate independence)? 
It would be great if the authors could discuss this point. Line 120 mentions that "Under a coordinate system, one can think of both tangent vectors and covectors as vectors in the sense of linear algebra, i.e., tuples of numbers", however, also abstract coordinate free vectors are part of linear algebra. I would just write that vectors and covectors are in coordinates represented by numerical coefficient vectors, i.e. tuples of numbers. Line 123: The metric is not only positive definite, but also symmetric. I am confused about the "surjective everywhere" in line 134. Is the problem with bijectivity not that the mapping from parameters to models is in general non-injective? It might be helpful to mention somewhere around line 193 that the $\Gamma_{ij}^k$ are called Christoffel symbols. This way the reader unfamiliar with these concepts can read up on it. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As the paper does not propose a new method but comments on the mathematical formulation of theories, limitations do not really apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your extensive review! Your write-up on the summary and strengths of our paper is completely spot on! Here we will address your major comments and questions. Minor comments and suggestions will be implemented directly in the text. See also our ["global" response](https://openreview.net/forum?id=vtLNwa6uX0&noteId=CvzNeGzNp7) for general discussion. **Precise mathematical formulation** You are right that it can indeed be more precise. Your suggestions on the usage of local-chart formulation and removing the current notion of “equivariance” & changing it into “coordinate-independence” are spot on and we will implement them in the paper. On the other hand, we are trying to make the text accessible to a broader audience in deep learning—this is our target community. Note that other reviewers, who seem to fall into this category, mentioned that our paper is accessible (**_R.ecDa_**, **_R.Poha_**) and insightful (**_R.LwSR_**). So, we will follow your suggestion by showing predominantly intuition and figures in the main text and deferring the extra mathematical details to the appendix. **Weiler et al. (2021)** They tackle the problem of “geometric deep learning”, where the manifold of interest is the input (and feature) space. So, at a high level, our work—which focuses on the parameter space—is complementary to theirs. E.g., one benefits from our work when measuring the sharpness of the loss landscape of a gauge-equivariant NN. While indeed their theory tells us how to transform parameters (i.e. convolution kernels in their case) under a gauge transformation (a group action), they are compatible with our work in the same way that the symmetry of a manifold is compatible with coordinate independence of the same manifold. **Choice of coordinate systems, is the metric arbitrary?** $\mathbb{R}^n$ with the canonical global coordinates is often the default choice for the parameter space of deep networks and only the metric is varied (e.g. 
in natural gradient methods and normalizing flows—the latter can be seen as a metric-learning mechanism through a non-invariant coordinate transformation acting on the canonical coordinates). Even for geometric-focused NNs like gauge equivariant nets [1], where strong manifold assumptions are imposed in the input/feature space, to our knowledge no further manifold assumption is applied on the parameter space, other than possibly the metric (e.g. using SGD vs. ADAM during the optimization). In this sense, considering general diffeomorphisms is useful since our work then provides the coordinate-independence guarantee and preservation of the metric in broad deep-learning applications. **Surjective everywhere** The problem is that the map $\theta \mapsto f(X; \theta)$ is almost always a submersion for an overparametrized network [2, 3], and certainly not a diffeomorphism. Meanwhile, the requirement for pulling back a metric is that the map must be an immersion. (Note that (local) diffeomorphism implies immersion.) We will rephrase “surjective everywhere” into “non-injective everywhere” to make this point clearer. If you have further suggestions, we always welcome them! Thanks again for your great comments and suggestions! **References** [1] Cohen, Taco, et al. "Gauge equivariant convolutional networks and the icosahedral CNN." ICML 2019. [2] Zhang, Guodong, James Martens, and Roger B. Grosse. "Fast convergence of natural gradient descent for over-parameterized neural networks." NeurIPS 2019. [3] Karakida, Ryo, and Kazuki Osawa. "Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks." NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: The authors addressed the issues raised in "weaknesses" by promising to rewrite the paper accordingly. I don't have any remaining questions. Please make sure to include a discussion of the relation to Cohen et al. and Weiler et al. in the paper. 
Note that they also do not require symmetries of the manifold itself, but just consider "gauge symmetries" in the parametrization/coordinates of tangent spaces. Equivariance under symmetries of the manifold may be induced by this coordinate independence.
Summary: The paper shows that reparameterizations of neural networks can be understood using Riemannian geometry. They first show how a reparameterization of a neural network's parameters can be expressed via a Riemannian metric, which then yields transformation rules that can be applied to any function on the parameters. The paper uses this as a basis to show why the determinant, trace, and eigenvalues of the loss Hessian are not invariant under reparameterization, and how applying the correct transformation rules yields reparameterization invariance. The same is shown for gradient descent and probability densities. Lastly, the paper applies this to infinite-width neural networks (showing that the NTK is not a reparameterization of a standard infinite-width Bayesian NN), to the Laplace marginal likelihood, and to preconditioned optimizers. Strengths: - Viewing reparameterizations from the perspective of Riemannian geometry brings much-needed clarity to the discussion. - The paper is excellently written and insightful, a pleasure to read. - The paper discusses a wide range of implications relevant to machine learning. Weaknesses: - The discussion on flatness-based generalization measures does not address existing generalization bounds for reparameterization-invariant flatness measures (see questions). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - It might be interesting to see how a very simple reparameterization (e.g., multiplying one layer with a constant c and the next with 1/c for ReLU NNs) can be interpreted in Riemannian geometry. I.e., what is G in that case, and what would the transformed Hessian look like? While this might not fit into the main text, it would make for a great practical example in the appendix for readers (like myself) not too familiar with Riemannian geometry. - The relation between flatness and generalization in light of reparameterizations has been established theoretically in [1]. 
- Is it possible to interpret relative flatness [1] as an invariant transformation? That is, is it possible that $G^{-1}(\theta) = \|\theta\|^2_2$? [1] Petzka, Henning, et al. "Relative flatness and generalization." Advances in Neural Information Processing Systems 34 (2021): 18420-18432. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper is clear about the assumptions and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your positive review! Please see also our ["global" response](https://openreview.net/forum?id=vtLNwa6uX0&noteId=CvzNeGzNp7) for a general discussion. Here, we address your major comments. All other comments and suggestions are implemented directly in the paper. You are completely right that our work is positioned in such a way as to bring much-needed clarity to the behavior of neural networks’ parameter spaces under reparametrization. We are glad to hear that our work can be followed easily and gives insights to you, as a researcher outside of Riemannian geometry. **Relative flatness of Petzka et al.** We note that Petzka et al., while using the term “reparametrization” in their paper, actually tackle the “symmetry” problem discussed in Sec. 1 in our paper. Reparametrization is a specific transformation of the parameter space under a smooth invertible map (diffeomorphism)—in the language of calculus, it is essentially the change-of-variable formula, e.g. in integration by substitution. Symmetry, meanwhile, is defined through group actions [1], e.g. under rescaling of the weights, where not just one scaling factor $c > 0$ is considered, but all of $c \in \mathbb{R}\_{>0}$ (the space $\mathbb{R}_{>0}$ here is the group). In any case, both kinds of invariance (under symmetry and reparametrization) are important as we mentioned in our paper (see also [2], Sec. 3 and Sec. 5) since in general one does not imply the other. (See our "global" response.) They thus complement each other, and our work complements Petzka et al.’s relative flatness, which provides an invariant measure for generalization under rescaling group actions, but is not invariant under reparametrization. (Notice that relative flatness is defined through the non-invariant Hessian trace, and we show in our paper how to make it invariant.) In other words, our work makes relative flatness invariant to _both_ rescaling group actions and reparametrization. 
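The rescaling group action described above can be made concrete with a short numerical sketch. This is an illustration supplied here, not the paper's code; the two-layer ReLU network and all names are hypothetical. For any fixed $c > 0$, scaling the first layer by $c$ and the second by $1/c$ leaves the network function unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network f(x) = W2 @ relu(W1 @ x).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))
x = rng.normal(size=4)

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# One element c of the group R_{>0} acting on the parameters (W1, W2).
c = 3.7
out = forward(W1, W2, x)
out_scaled = forward(c * W1, W2 / c, x)

# relu(c * z) = c * relu(z) for c > 0, so the two outputs coincide.
assert np.allclose(out, out_scaled)
```

The invariance holds for every $c \in \mathbb{R}_{>0}$ simultaneously, which is what makes this a group action on a fixed coordinate system rather than a single fixed reparametrization.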
**Interpretation of relative flatness as an invariant transformation** The symmetry $f(c \theta) = f(\theta)$ can be written as a group action of the group $\mathcal{G} := \mathbb{R}_{>0}$ on the manifold $\Theta := \mathbb{R}^n$ by multiplication. Your intuition is spot on that $\mathcal{G}$-invariant quantities such as relative flatness can be seen as quantities in the "symmetry-free space". In the above case, one can think of relative flatness as a generalization metric on the quotient space $\Theta / \mathcal{G}$, which happens to be the sphere $\mathbb{S}^{n-1} = \\{ \theta \in \Theta : \Vert \theta \Vert^2_2 = 1 \\}$. **Example with simple reparametrization** As we have discussed before, the rescaling "reparametrization" is better described as "symmetry" and thus is not suited as an example in this paper. But we are happy to give a simple step-by-step example of reparametrization and its effect on the metric, Hessian, etc. in the appendix. We will do so by expanding Example 1a, i.e., using the transformation $\theta = \log \psi$. [Our answer to **_R.ecDa_**](https://openreview.net/forum?id=vtLNwa6uX0&noteId=1K4jej38ao) might also be of interest to you. Thanks again and please let us know if you have further comments! **References** [1] Kunin, Daniel, et al. "Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics." ICLR, 2020. [2] Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." ICML, 2017. --- Rebuttal Comment 1.1: Title: Answer to authors Comment: I thank the authors for their reply. Both the reply and the other reviews keep me convinced that this is a good paper. I keep my score.
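To preview what such an appendix example might look like, here is a minimal 1-D sketch of the $\theta = \log \psi$ transformation discussed in the rebuttal above. The quadratic loss is an assumption made purely for illustration; the point is only that the raw second derivative changes under the reparametrization while the metric-corrected quantity $G^{-1}H$ does not:

```python
import numpy as np

# Assumed toy loss in the original coordinate theta:
# L(theta) = 0.5 * (theta - 1)^2, minimized at theta* = 1 with L''(theta*) = 1.
L_dd_theta = 1.0
theta_star = 1.0

# Reparametrize via theta = log(psi), i.e. psi = exp(theta).
psi_star = np.exp(theta_star)

# Chain rule: d^2/dpsi^2 L(log psi) = (L'' - L') / psi^2, and L' = 0 at the minimum.
H_psi_raw = L_dd_theta / psi_star**2

# Pulling back the Euclidean metric of theta gives G(psi) = (dtheta/dpsi)^2 = 1/psi^2.
G_psi = 1.0 / psi_star**2

# The metric-corrected Hessian G^{-1} H is unchanged by the reparametrization.
H_corrected = H_psi_raw / G_psi

assert not np.isclose(H_psi_raw, L_dd_theta)  # raw sharpness changed (1/e^2 vs. 1)
assert np.isclose(H_corrected, L_dd_theta)    # corrected sharpness is invariant
```

The same bookkeeping extends to higher dimensions, where $G$ becomes the pulled-back metric matrix and $H$ the Hessian in the new coordinates.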
Summary: This work analyzes the invariances and non-invariances of model reparameterization in machine learning. The authors show that, if we account for Riemannian metrics in parameter spaces, then many quantities thought to be not invariant are in fact invariant to reparameterization. Thus, by properly applying transformation rules on geometric quantities, we can obtain equivariant or invariant functions on parameter space. Strengths: 1. Covers applications of this type of thinking in several areas of machine learning. 2. Good, careful exposition of geometric concepts and calculations. 3. The note on the utility of non-invariant reparameterization for normalizing flows and optimization is interesting. 4. Overall, this work gives a useful perspective that helps analyze ML models (e.g. Section 5.2), and will hopefully give actionable insights to improve them (e.g. other metrics instead of Fisher). Weaknesses: 1. I am not sure about the utility of the suggested method of measuring sharpness, and I would appreciate it if the authors could comment on this. Indeed, the sharpness of ReLU networks depends on the scale of the weights chosen. However, [Du et al. 2018] shows that there is some implicit bias, so that GD does not converge to arbitrary scales of the weights in practice. Also, there are generalization results in terms of (I believe) non-invariant Hessian trace [Ding et al. 2023]. [Du et al. 2018] Algorithmic Regularization in Learning Deep Homogeneous Models: Layers are Automatically Balanced. NeurIPS 2018 [Ding et al. 2023] Flat minima generalize for low-rank matrix recovery. arXiv 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: n/a Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussion of limitations on Page 2. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review! Please see our ["global" response](https://openreview.net/forum?id=vtLNwa6uX0&noteId=CvzNeGzNp7) for a general discussion. Here, we address your specific comments. **Du et al.** They focus on the invariance of ReLU networks under the scaling symmetry, while we are tackling the problem of invariance under reparametrization. This difference is also made clear by Dinh et al. 2017, Sec. 3 (symmetry) vs. Sec. 5 (reparametrization). See also [our answers to **_R.ecDA_**](https://openreview.net/forum?id=vtLNwa6uX0&noteId=1K4jej38ao) and our "global" response. **Ding et al.** Indeed, they showed generalization results with the non-invariant Hessian trace. However, their results will have the pathologies that we discussed in our paper and in Sec. 5 of Dinh et al. Our work is compatible with them in that it provides a guardrail for their work: we give the necessary steps to make their results resistant to pathologies under reparametrization. In any case, we added both works to the related work section of our paper. Please let us know if you have further comments or questions! **References** [1] Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." ICML, 2017. --- Rebuttal Comment 1.1: Comment: We thank the author for their rebuttal. I think certain readers would appreciate the addition of discussion for these two papers. I have no further questions.
Summary: Under model reparametrization, Hessian-based flatness measures, optimization trajectories, and probability densities are not invariant. Motivated by these inconsistencies, this paper studies the invariance associated with the reparametrization of neural networks. By viewing the parameter space as a Riemannian manifold, the authors show that invariance and equivariance under reparametrization are preserved by explicitly including the metric when computing geometric objects such as the Hessian. The authors point out that acknowledging the metric helps in measuring the flatness of minima, optimization, and probability-density maximization. Strengths: This paper draws attention to the nature of reparametrization through Riemannian geometry. By introducing a framework that transforms representations of geometric objects to keep them invariant under reparametrizations, the paper provides a useful tool in comparing properties of neural networks after reparametrization. I appreciate the authors’ effort to make the derivations both mathematically rigorous and accessible. In particular, since the parameter space is usually Euclidean, it makes sense to present most of the material in linear-algebra terminology instead of the more general Riemannian geometry. Weaknesses: The novelty of this paper seems limited. As the authors also mention, the lack of invariance under reparametrization has been observed before. The transformation of various quantities in reparametrization has also been discussed (see below). The discoveries on the application side are also not well-presented. As a result, it is not clear what the main contributions are. Some parts of the paper could be explained in more detail. (a) The goal of section 5.1 is not clear. Is the goal to show that SP and NTP are different because they cannot be seen as reparametrizations of each other? 
(b) The significance of section 5.2 might be clearer if the authors could add a sentence to give a general intuition for Equation 5. The transformation of the Hessian and gradient under reparametrization has been discussed in a previous paper that is not cited [1]. Could the authors comment on how their approach in section 3 is different? [1] Kunin, Daniel, et al. "Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics." International Conference on Learning Representations. 2020. (Appendix A) Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - By comparing the definitions on page 1, it seems that invariance under symmetry is a special case of invariance under reparametrization. Can the reparametrization studied in this paper be viewed as more general than previous works on weight-space symmetries? - Could the authors elaborate on the intuition on why we should explicitly include the metric when comparing the sharpness of the solution found by different optimizers, as suggested in section 5.3? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors state limitations at the end of the introduction section. There are no potential negative societal impacts of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback! Here we address your major comments/questions. We incorporated your suggestions into the text directly. Please see also our ["global" response](https://openreview.net/forum?id=vtLNwa6uX0&noteId=CvzNeGzNp7) for a general discussion. **Novelty** The main contribution of our present work is to show that invariance under reparametrization of many quantities relevant to neural nets is natural when considering the correct transformation of geometric objects. While indeed previous work has discussed non-invariance under reparametrization, they either accept it as a fact (e.g. Sec. 5 of [1]) or come up with a special metric/method to overcome this issue (e.g. [2]). Our argument, meanwhile, is broadly applicable since no assumption about the metric is imposed. Our work thus provides a guardrail against past & future confusions regarding invariance. We hope to invoke more interest in this topic by providing some example applications in Sec. 5. Finally, please note that other reviewers mentioned that our work provides a new perspective (**_R.Poha_**), brings much-needed clarity (**_R.LwSr_**), and is useful since making ML researchers aware of the discussed topic is of utmost significance (**_R. togv_**). **Sec. 5.1** Our goal is to clear up confusion about the term “parametrization” in SP and NTP. In the preceding sections we have argued that if two parametrizations are connected by a diffeomorphism, they represent a single function. But since SP and NTP, despite their monikers, are not reparametrization of each other, then they represent two different functions. This clears up the confusion why they have different limiting behavior (in terms of the NTK and NNGP kernels) and calls for a more suitable way of analyzing them. **Reparametrization vs symmetry** When one fixes the bijective transformation $T$, and it happens to be smooth, then indeed they can be seen as the same, e.g. 
in the normalized NN example, the scaling $c$ is fixed. However, in general $T$ is not fixed, i.e. we consider instead a _family_ of invertible maps $\Theta \to \Theta$, and so they are inherently different. Invariance under symmetries is better expressed in terms of group actions from (Lie) group theory, i.e. the map $T$ should instead be defined as $T: \mathcal{G} \times \Theta \to \Theta$ where $\mathcal{G}$ is a group. For normalized NNs, $\mathcal{G} = \mathbb{R}_{>0}$, i.e. we take into account _all_ scaling factors $c > 0$ in $\mathcal{G}$ instead of fixing it. The definition of symmetry on page 1 is indeed incomplete (missing the group term) and we have updated it. Please note that this missing term does not affect the discussion since we focus on reparametrization, not symmetry. Note also that the group-action definition is standard in the literature, including in Kunin et al. **Kunin et al.** They analyze the gradient and the Hessian under symmetry, not reparametrization (see above). Their work therefore tackles a different problem than ours. In particular, they study invariance under group actions, while we study invariance under diffeomorphism. Both are important (as written in our paper and in e.g. [1] Sec. 3 and Sec. 5) and our work is complementary to theirs—our geometric insights further enhance their symmetry-invariant gradient and Hessian formulations with reparametrization invariance. We added their work to the citation list. Thanks for pointing out their work! **Why include the metric in sharpness** There are at least two reasons why. _First_, as we have shown in Sec. 2.2.1 and Sec. 3.1, this is the geometrically principled way to compute Hessian-based sharpness (trace, det, eigenvalues), and this naturally yields invariance under reparametrization and solves the problem shown by [1, Sec. 5]. 
_Second_, when using a preconditioned gradient descent (PGD), the metric-infused sharpness yields similar behaviors as standard gradient descent [3], allowing for direct comparisons. Indeed, PGD is essentially just a GD acting on a space with different geometry induced by the metric. Moreover, by taking into account the parameter-space metric, this reveals that the loss landscape geometry under ADAM’s metric is actually much sharper than that assumed in GD (our Fig. 5)---this information might be useful for future work. [Our answer to **_R.LwSr_**](https://openreview.net/forum?id=vtLNwa6uX0&noteId=OHnkKsN22C) regarding symmetry vs reparametrization might also interest you. Please let us know if you have further comments/questions/suggestions! **References** [1] Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." ICML, 2017. [2] Jang, Cheongjae, et al. "A reparametrization-invariant sharpness measure based on information geometry." NeurIPS, 2022. [3] Cohen, Jeremy M., et al. "Adaptive gradient methods at the edge of stability." arXiv preprint arXiv:2207.14484 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for the response. I now have a better understanding of the significance of invariance under reparametrization. I also appreciate the clarification on the difference between invariance under reparametrization and symmetry. I have increased my score accordingly.
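The invariance of metric-infused sharpness discussed in this thread can also be checked with a few lines of linear algebra. In the sketch below, a linear change of coordinates stands in for a general diffeomorphism, and all matrices are random placeholders: at a critical point, the Hessian and the metric both pull back as $J^\top(\cdot)J$, so $G^{-1}H$ changes only by a similarity transform and its eigenvalues (the sharpness) are preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical Hessian H (at a critical point) and metric G in theta-coordinates.
H = rng.normal(size=(n, n))
H = H @ H.T                  # symmetric positive semi-definite
G = np.eye(n)                # Euclidean metric

# Linear reparametrization psi = A @ theta; its Jacobian is J = d theta / d psi = A^{-1}.
A = rng.normal(size=(n, n))
J = np.linalg.inv(A)

# Both (0,2)-tensors transform with J^T (.) J under the coordinate change.
H_new = J.T @ H @ J
G_new = J.T @ G @ J

# G^{-1} H in the new coordinates equals J^{-1} (G^{-1} H) J: same spectrum.
eig_old = np.sort(np.linalg.eigvals(np.linalg.inv(G) @ H).real)
eig_new = np.sort(np.linalg.eigvals(np.linalg.inv(G_new) @ H_new).real)
assert np.allclose(eig_old, eig_new)
```

The raw Hessian spectra of $H$ and $H_{\text{new}}$ alone do not match; only after dividing out the metric does the sharpness become coordinate-independent.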
Rebuttal 1: Rebuttal: **To all reviewers:** Thank you very much for your input! To supplement the responses to your individual reviews, here we would like to address the common questions and comments. Our work focuses on addressing _invariance under reparametrization_, i.e. under change of variable from the point of view of calculus or the concept of coordinate independence from the point of view of differential geometry. In particular, it studies the invariance of quantities like gradients and Hessians when the coordinates of the parameter space are mapped into _new_ coordinates. This is different from _invariance under symmetry_, where the invariance is studied under a _group_ acting on a _fixed_ choice of coordinates of the parameter space. Crucially, these two concepts are compatible with each other. Indeed, from the differential geometry point of view, coordinate independence is an _inherent_ property of a manifold, as **_R.togv_** also pointed out; symmetry is a property that _can be_ studied further on that manifold—every manifold is coordinate-independent, but not every manifold has symmetry. Thus, our work complements previous works and provides a guardrail for future works that focus on studying invariance under symmetry. For example, our work enhances the works of Kunin et al. [1], Petzka et al. [2], Cohen et al. [3], and Weiler et al. [4], where they addressed invariances under various symmetries but are still susceptible to pathologies under reparametrization. (See our individual responses for more details.) Please note also that even though these previous works often used the term "reparametrization", their problem is better termed "symmetry", as per the differential-geometric definitions we use. **References** [1] Kunin, Daniel, et al. "Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics." ICLR, 2020. [2] Petzka, Henning, et al. "Relative flatness and generalization." NeurIPS, 2021. [3] Cohen, Taco, et al. 
"Gauge equivariant convolutional networks and the icosahedral CNN." ICML 2019. [4] Weiler, Maurice, et al. "Coordinate Independent Convolutional Networks--Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds." arXiv preprint arXiv:2106.06020 (2021).
NeurIPS_2023_submissions_huggingface
2023
Revisiting the Minimalist Approach to Offline Reinforcement Learning
Accept (poster)
Summary: This paper proposes an offline RL algorithm based on TD3+BC and BRAC, integrating popular design elements. The proposed method achieves higher scores on D4RL and V-D4RL datasets. Strengths: The idea of exploring the popular design elements in offline RL algorithms is interesting. The authors make a great effort in conducting confirmatory experiments including various ablation studies. Weaknesses: Overall, I see the proposed method as an integrated method of existing algorithms. From my perspective, the novelty and contributions are weak. It may be more insightful if the author put more emphasis on the analyses of existing design elements rather than combining them into one new method. I do not see many insightful analyses in the current manuscript. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Table 6: ReBRAC w large batch. What does it mean? 2. The performance improvements of ReBRAC seem marginal, especially in challenging tasks (door-human, door-cloned….) 3. Table 6: it seems that the critic penalty is not crucial. Is it a universal conclusion, or just due to a lack of hyperparameter tuning? 4. Can authors provide comparisons of computation costs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We believe that the main concern is the limited novelty, which we address first, and then we move to more precise limitations. ---------- ### On limited novelty While we agree that the paper may seem to feature constrained technical novelty, given it doesn't propose new algorithmic techniques but rather builds upon existing ones, we see merit in studies that clarify determinants of performance, be they algorithmic elements, coding strategies, or hyperparameter selections. Such work can, in its own right, offer retrospective novelty. This invites us to ponder some questions: Were we aware that this unpretentious combination of design choices could eclipse the efficiency of all other methods to this extent? Did we know prior to our study that this kind of method could perform exceptionally well in environments requiring strong stitching competencies (AntMaze, as per our understanding, has generally seen dominance from methods like IQL or CQL with ensembles (MSG))? Can we consider such seminal works as Rainbow [1] or Dreamer-v3 [2] to have limited novelty, merely because they integrate known elements? We hope this dialogue serves to highlight the scope, relevance, and intrinsic merits of our study. As for the analysis of extant design elements, we have taken measures to reference a wealth of existing literature in our description, thereby providing a comprehensive overview. For instance, we pointed out how LayerNorm is instrumental in mitigating catastrophic q-value extrapolations, and that large batches can enhance and balance convergence given adequate data. Even though a more thorough analysis of the precise volume of data needed could constitute a separate study, we admit that it was beyond our current work's scope. The same goes for the increased discount factor, which could use more exploration but was not the focus of our paper. 
We hope that our response brings more clarity and understanding to the novelty and significance of our research. We look forward to any further comments that could help us improve and refine our work. -------------------- ### On Questions > Table 6: ReBRAC w large batch. What does it mean? ReBRAC with the use of a large batch (the definition of the large batch can be found in the paper; essentially, the value is fixed to 1024). We found the use of large batches to be detrimental on AntMaze datasets. > The performance improvements of ReBRAC seem marginal, especially in challenging tasks (door-human, door-cloned…) We highly value your feedback concerning the perceived marginal performance improvements of ReBRAC, especially in challenging tasks. (1) Although it may seem that the performance gains of our method over SAC-RND on Gym-MuJoCo are minimal, it is essential to note that ReBRAC achieves this level of efficiency sans the requirement for an additional network. Moreover, it surpasses other contenders by a minimum of 10%. (2) With regard to the AntMaze domain, ReBRAC outperforms the nearest competitor by a margin of 26%, which we believe demonstrates notable superiority. (3) In the Adroit domain, ReBRAC exhibits an enhanced performance level, outpacing the closest competitor by an average of 9%. (4) Furthermore, in the V-D4RL test, ReBRAC once again outperforms the nearest competitor, this time by a margin of 16%. While we acknowledge that we were not able to set a new benchmark in the most challenging tasks in the offline setting, we conducted additional experiments in the offline-to-online setting during the rebuttal phase. These tests revealed that our method exhibits superior performance in the Adroit domain and aligns with Cal-QL in the AntMaze domain. We invite you to refer to the attached .pdf for additional details. > Can authors provide comparisons of computation costs? Extensive computational cost comparisons can be found in the appendix. 
Thank you for noticing; we will add a reference to it in the main text. > Table 6: it seems that the critic penalty is not crucial. Is it a universal conclusion, or just due to a lack of hyperparameter tuning? We are not sure what you mean by the lack of hyperparameter tuning in this specific case; could you elaborate on that? --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thanks for the authors' responses. Some of my concerns are addressed. However, in my view, it may be more significant to provide insightful analyses of these added components than merely to present them. I would appreciate it if the authors briefly discuss these components and provide some insights regarding why they work or why they do not work. Additionally, I have two questions: (1) It is noticed that you tune hyperparameters per dataset for each method. I’m a little concerned that the tuning effort you made for the proposed method is larger than for the baseline methods. In addition, I do not think the per-dataset tuning is affordable in real-world scenarios. Hence, can the authors provide the tuning logs for baseline methods, or provide comparison results with unified hyperparameters? (2) Regarding the online finetuning experiment in the attached pdf: why are adroit-human tasks missing? And can ReBRAC outperform [1]? [1] Ball, Philip J., et al. "Efficient online reinforcement learning with offline data." arXiv preprint arXiv:2302.02948 (2023). --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and questions. To address your first concern about the analyses of added components, we understand and appreciate the emphasis on deeper insight. The depth and nuances of each component certainly warrant dedicated attention. For example, reference [2] delves into the intricacies of large batches in offline RL, while another study [3] examines the dynamics of discount factor value adjustments. 
The primary goal of our paper was to integrate these insights with the minimalist offline RL framework, represented by TD3 + BC, to demonstrate its potential for state-of-the-art results across varied datasets. We believe the ablation studies, which highlight the individual contributions of each component, provide a valuable perspective on this. Finally, we acknowledge your reference to a concurrent work [1] in question (2), which, to our knowledge, embarks on analogous undertakings in a different setup. Regarding your specific questions: > (1) It is noticed that you tune hyperparameters per dataset for each method. I’m a little concerned that the tuning effort you made for the proposed method is larger than the baseline methods. The choice to tune hyperparameters per dataset is consistent with emerging practices in offline RL, exemplified by MSG [4] (8 - 12 parameter sets) and SAC-RND (9 - 12 parameter sets). We ensured that our tuning efforts for ReBRAC were comparable to those for the baselines. Specifically, the grid dimensions for both ReBRAC and IQL are similar (16 - 20 parameter sets), with ReBRAC's performance proving superior. To ensure transparency and provide further context on the influence of hyperparameter tuning, we have shared our results here: **https://openreview.net/forum?id=vqGWslLeEw&noteId=TM1rs0kWh4**. Furthermore, optimal hyperparameters are detailed in Appendix B within the Supplementary Material. > In addition, I do not think the per-dataset tuning is affordable in real-world scenarios. We recognize the real-world challenges of per-dataset tuning. However, it's worth noting that different domains often necessitate distinct hyperparameters. Our work emphasizes the practical utility of ReBRAC, as reflected in the Expected Online Performance (EOP) scores detailed in Table 7. The EOP metric, in particular, provides a nuanced understanding of how performance can vary based on the number of testable policies. 
As illustrated, ReBRAC exhibits promising results when compared to notable baselines such as IQL, especially when policy budget constraints are considered. > (2) Regarding the online finetuning experiment in the attached pdf: why adroit-human tasks are missing? And can ReBRAC outperform [1]? > The choice to use cloned datasets was primarily driven by the availability of reference scores in CORL. Given the inherent similarities between cloned datasets and their human counterparts, we expect only minor performance differences. Regarding the comparison with [1], we see potential in ReBRAC based on its performance relative to Cal-QL, which has been documented to match or surpass [1]. The absence of direct comparisons is due to the lack of numerical results from [1]. We trust that our clarifications address your concerns and shed light on the novelty and significance of our work. Thank you for your time and attention. [1] Ball, Philip J., et al. "Efficient online reinforcement learning with offline data." arXiv preprint arXiv:2302.02948 (2023). [2] Nikulin, Alexander, et al. "Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size." arXiv preprint arXiv:2211.11092 (2022). [3] Hu, Hao, et al. "On the role of discount factor in offline reinforcement learning." International Conference on Machine Learning. PMLR, 2022. [4] Ghasemipour, Kamyar, Shixiang Shane Gu, and Ofir Nachum. "Why so pessimistic? estimating uncertainties for offline rl through ensembles, and why their independence matters." Advances in Neural Information Processing Systems 35 (2022): 18267-18281.
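The Expected Online Performance metric discussed in this thread has a simple closed form under one common protocol: the expected best score among k policies drawn uniformly without replacement from the n evaluated ones. Whether this exactly matches the paper's EOP computation is an assumption on our part; the function name and the example scores below are purely illustrative.

```python
import math

def expected_online_performance(scores, k):
    """Expected maximum score among k policies drawn uniformly at
    random, without replacement, from the n evaluated policies.

    With scores sorted ascending, the element at (0-based) index i is
    the maximum of the draw with probability C(i, k-1) / C(n, k).
    """
    s = sorted(scores)
    n = len(s)
    return sum(s[i] * math.comb(i, k - 1)
               for i in range(k - 1, n)) / math.comb(n, k)

# With a budget of 1 policy, EOP is the average score; with a budget of
# n it is the best score; intermediate budgets interpolate between them.
scores = [30.1, 55.4, 95.5, 91.9, 45.1]
curve = [expected_online_performance(scores, k) for k in (1, 3, 5)]
```

The resulting curve is monotone in the budget k, which is why comparing methods at several budgets (as Table 7 does) gives a fuller picture than a single best-hyperparameter score.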
Summary: This paper revisited recent methods in the offline RL area and discussed how different design choices impact offline RL methods' performance. In particular, the authors focus on four hyper-parameter choices, (i) number of network layers, (ii) using LayerNorm, (iii) batch size, and (iv) discounting factor $\gamma$, as well as one algorithmic choice, (v) using actor/critic regularization, and conducted comprehensive experiments to show the importance of different factors. Their method, ReBRAC, achieved strong empirical performance on 51 datasets. Strengths: 1. The paper presents comprehensive experiments. 2. The proposed method demonstrates strong empirical performance in the conducted experiments. 3. The paper is clearly written. 4. This work shows great engineering value and has the potential to offer valuable guidance on design choices for the offline RL community. Weaknesses: 1. A combination of existing techniques/tricks. In particular, most of the design choices, (i) number of network layers, (ii) using LayerNorm, (iii) batch size, and (iv) tuning $\gamma$, are hyper-parameters or standard deep NN techniques. 2. Table 6: ReBRAC w/o LN has a 38.0 average score over all domains, which is noticeably lower than the authors' reproduction of TD3+BC (52.2), leading to my speculation that adding LN alone could achieve great empirical performance. It would be nice if TD3+BC w/ LN alone could be shown. (For reference, Rainbow [1] shows the performance of all individual choices in their Figure 1 and none of them were close to Rainbow.) 3. Table 6: The critic penalty only gives marginal improvement. As the batch size, using LN, number of layers, and tuning $\gamma$ are mostly hyper-parameters rather than RL algorithm designs, the marginal benefits from the critic penalty further limit its algorithmic contribution. 4. According to Table 1, SAC-RND seems to have all five components.
It would be nice if the authors could elaborate on the difference between SAC-RND and ReBRAC, for example, in section 2 or 3. Overall I find this paper has limited novelty, especially considering that some components mentioned are merely hyper-parameters. However, I find it might be a significant contribution to achieve superior performance on 51 datasets, providing potential implementation guidance to the offline RL community. I therefore voted for a borderline acceptance with a low confidence score. [1] Rainbow: Combining Improvements in Deep Reinforcement Learning. https://arxiv.org/pdf/1710.02298.pdf Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no further questions as the paper was clearly written. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we would like to thank you for your time reviewing our paper and valuable comments. We believe that the main concern is the limited novelty, which we address first, and then we move to more precise limitations. ------------ ### On limited novelty We appreciate the reviewer's comments and acknowledge that while our research represents a judicious amalgamation of existing techniques and hyperparameters, the novelty lies in the distinctive combination and application. Indeed, insights into performance enhancement, whether through algorithmic or code tweaks or hyperparameter tuning, carry intrinsic merit. Our work served to highlight certain empirical results that might otherwise have remained unreported. For instance, the fact that our chosen blend of design choices significantly outperforms other methods is noteworthy. Moreover, it was hitherto unknown that this category of methods performs exceptionally well in challenging environments, such as AntMaze. Previously, the prevalent belief favoured the efficacy of methods like IQL or CQL with ensembles (MSG). Hence, if we scrutinize papers like Rainbow [1] or Dreamer-v3 [2], which are clever syntheses of known elements, should we consider their innovation limited? We hope that this perspective will elucidate our paper's aim and its inherent merits. We have also augmented our submission with offline-to-online experiments (refer to the .pdf file). These reveal ReBRAC to be a potent baseline in this context, further expanding our contribution's dimensions. Therefore, despite its perceived limited novelty, we propose that our paper provides essential guidance for offline RL implementation, thereby contributing significantly to this field of research.
--------------- ### On other outlined weaknesses > Table 6: ReBRAC w/o LN has a 38.0 average score over all domains, which is noticeably lower than the authors' reproduction of TD3+BC (52.2), leading to my speculation that adding LN alone could achieve great empirical performance. It would be nice if TD3+BC w/ LN alone could be shown. (For reference, Rainbow [1] shows the performance of all individual choices in their Figure 1 and none of them were close to Rainbow.) This is a great observation and suggestion. To address it, we include Rainbow-like experiments, i.e., adding each individual choice separately to TD3+BC (see the attached .pdf file). Notably, LN and deeper networks improve the performance of TD3+BC but still lag far behind ReBRAC. > According to table 1, SAC-RND seems to have all five components. It would be nice if the authors could elaborate on the difference between SAC-RND and ReBRAC, for example, in section 2 or 3. Sure, we will add elaboration on the difference in the final version. The main difference is that SAC-RND uses penalization based on the RND bonus in both actor and critic, which requires pre-training a predictor network. -------------- ### References [1] Hessel, Matteo, et al. "Rainbow: Combining improvements in deep reinforcement learning." Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018. [2] Hafner, D., Pasukonis, J., Ba, J., & Lillicrap, T. (2023). Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for the responses and additional experiments. > we propose that our paper provides essential guidance for offline RL implementation, thereby contributing significantly to this field of research. The implementation guidance to the offline RL community was the reason for my original positive vote.
> we include Rainbow-like experiments Thanks for the experiments; my concern regarding this point has been addressed. I'll raise my score to 6.
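For readers unfamiliar with the RND bonus that distinguishes SAC-RND from ReBRAC in this thread, here is a minimal sketch of the idea. A fixed random tanh map stands in for the frozen target network, and an ordinary least-squares fit stands in for the pre-trained predictor; both are toy substitutes for the MLPs SAC-RND actually uses, introduced here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))          # frozen random "target network"

def f_target(sa):                    # fixed map, never trained
    return np.tanh(sa @ A.T)

# "Pre-train" the predictor on in-dataset state-action pairs. SAC-RND
# trains an MLP with gradient descent; a linear least-squares fit is
# enough to show the effect here.
data = rng.normal(size=(1024, 8))
W_pred, *_ = np.linalg.lstsq(data, f_target(data), rcond=None)

def rnd_bonus(sa):
    """Predictor-vs-target error: small on pairs resembling the
    dataset, large on out-of-distribution pairs, which is why SAC-RND
    can subtract it as an anti-exploration penalty in both the actor
    and the critic objectives."""
    return np.sum((sa @ W_pred - f_target(sa)) ** 2, axis=-1)

in_dist = rnd_bonus(data).mean()
out_dist = rnd_bonus(5.0 * rng.normal(size=(1024, 8))).mean()
```

The gap between `out_dist` and `in_dist` is what makes the bonus usable as a penalty; ReBRAC avoids this extra pre-training stage by penalizing distance to the dataset action instead.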
Summary: This paper revisits several minor design choices in recent offline RL literature and equips a standard method, TD3+BC (or more generally BRAC), with these designs to attain a strong baseline for offline RL with state-of-the-art performance on both D4RL and V-D4RL benchmarks. These critical designs include deeper networks, LayerNorm, larger batches, a decoupled actor-critic penalty, and an adjusted discount factor. Extensive benchmarking experiments and ablation studies are conducted across a range of domains. Strengths: 1. Careful look into detailed design choices from a large amount of literature, which naturally motivates the proposed solution 2. Extensive experiments across D4RL’s Gym-MuJoCo, AntMaze, and Adroit tasks, and even vision-based V-D4RL tasks 3. Great efforts for a fair comparison with baseline methods, including hyperparameter tuning for baselines and measuring Expected Online Performance 4. Well written Overall, I really appreciate the contribution to the offline RL community by revisiting a strong yet overlooked baseline method, BRAC. Weaknesses: 1. A probably incorrect assertion about a technical detail of BRAC (see Question 1 below) 2. Limited technical novelty (but I think it doesn't matter for this paper, which focuses on a retrospective analysis) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. If I understand correctly, BRAC has adopted actor penalization and critic penalization at the same time. Indeed, BRAC-v in the original paper adds a penalty to both actor and critic learning objectives (Eq. 6 and 7 in the BRAC paper). Thus, some assertions in this paper (Lines 65 and 111) are incorrect and should be revised. Nevertheless, the idea of decoupling actor and critic penalties has not been explored to my knowledge. 2. Following the above question, an ablation study comparing decoupled actor-critic penalties and coupled ones (using only one hyperparameter) is missing in Table 6. Can the authors present these results? 3.
What do you mean by "Since our approach principally builds upon TD3+BC, the differences in their performances should be considered the most important ones." in line 124? 4. In line 185, the authors claim, "ReBRAC outperforms TD3+BC not because of different implementations or actor regularization parameter choice". However, in my opinion, the differences between ReBRAC and TD3+BC are indeed implementation choices rather than algorithmic innovations. I understand that the efforts of building ReBRAC upon TD3+BC are non-trivial, but the authors should consider revising this sentence to make it much clearer. 5. Implementation details for vision-based V-D4RL tasks, including CNN architecture to encode visual observations, are missing in both the main paper and the appendix. Although I have found the implementations in supplementary source code, I recommend clarifying these details in a future revision of the paper. 6. TD3+BC uses state normalization on Gym-MuJoCo tasks. Did the authors include this implementation detail in their experiments for both TD3+BC and ReBRAC? Since in the TD3+BC paper, state normalization does not impact significantly, in my opinion, it is okay not to include it, but I recommend the authors clarify this in the paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This work has discussed its limitations and future work in the conclusion. There does not seem to be any negative social impact of this paper that should be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and the identified weaknesses; we address them and your questions as follows: > If I understand correctly, BRAC has adopted actor penalization and critic penalization at the same time. Indeed, BRAC-v in the original paper adds a penalty to both actor and critic learning objectives (Eq. 6 and 7 in the BRAC paper). Thus, some assertions in this paper (Lines 65 and 111) are incorrect and should be revised. Nevertheless, the idea of decoupling actor and critic penalty has not been explored to my knowledge. You're correct in your assessment; we will revise our statements in the final version of the paper. > Following the above question, an ablation study comparing decoupled actor-critic penalties and coupled ones (using only one hyperparameter) is missing in Table 6. Can authors present these results? Yes; please see the attached .pdf file in the official comment. Notably, performance with coupled penalties drops by a small margin when compared to decoupled ones. We will make sure to include this information in the appendix, as this significantly reduces the hyperparameter search space and should improve the expected online performance on smaller budgets when compared to other methods. > What do you mean by "Since our approach principally builds upon TD3+BC, the differences in their performances should be considered the most important ones." in line 124? This was meant to highlight that ReBRAC's improvements do not simply come from the actor-regularization hyperparameter search for TD3+BC. Hopefully, this clarifies what we meant; we will make sure to update the corresponding text. Moreover, we also included results for experiments where we add one design choice at a time to TD3+BC to further demonstrate the benefit of the found combination of choices. > Implementation details for vision-based V-D4RL tasks, including CNN architecture to encode visual observations, are missing in both the main paper and the appendix.
Although I have found the implementations in supplementary source code, I recommend clarifying these details in a future revision of the paper. Sure, we will provide a clear reference to the original architecture and will elaborate on it in the appendix. > TD3+BC uses state normalization on Gym-MuJoCo tasks. Did the authors include this implementation detail in their experiments for both TD3+BC and ReBRAC? Since in the TD3+BC paper, state normalization does not impact significantly, in my opinion, it is okay not to include it, but I recommend the authors clarify this in the paper. As you rightfully noted, state normalization does not impact performance much (this is what we also observed in preliminary experiments), therefore we decided not to use it, keeping in mind that it may complicate further application in the offline-to-online setting, which we did not cover in the original submission. To further illustrate this, we conducted a set of experiments in the offline-to-online setting and found ReBRAC to be a strong competitor, especially on the Adroit domain (for results, again, see the attached .pdf). --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. Most of my issues are solved. I will maintain my score since I did not have a major concern with the paper. I really appreciate the effort made for the paper and wish the authors good luck with the submission.
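To make the decoupled actor/critic penalty discussed in this thread concrete, here is a small numpy sketch of the two BRAC-style objectives with separate coefficients. The squared distance to the dataset action stands in for the divergence term (as in TD3+BC); the function and argument names are illustrative, not taken from the authors' code.

```python
import numpy as np

def decoupled_brac_losses(q_pi, q_next, r, done, gamma,
                          pi_a, a, pi_a_next, a_next,
                          beta_actor, beta_critic):
    """BRAC-style losses with separate actor/critic penalty weights.

    q_pi      : Q(s, pi(s)) for the current policy action    (batch,)
    q_next    : target Q(s', pi(s'))                         (batch,)
    pi_a, a   : policy action / dataset action at s          (batch, d)
    pi_a_next, a_next : the same quantities at s'            (batch, d)

    Divergence to the behavior policy is approximated by the squared
    distance to the dataset action; setting
    beta_actor == beta_critic recovers the coupled variant.
    """
    dist = np.sum((pi_a - a) ** 2, axis=-1)
    dist_next = np.sum((pi_a_next - a_next) ** 2, axis=-1)

    # Actor: maximize Q while staying close to the behavior policy.
    actor_loss = np.mean(-q_pi + beta_actor * dist)

    # Critic: the next-state value inside the TD target is penalized.
    td_target = r + gamma * (1.0 - done) * (q_next - beta_critic * dist_next)
    return actor_loss, td_target
```

With `beta_critic = 0` the target reduces to a plain TD3+BC-style target, which is one way to see why tuning the two weights independently enlarges the design space the ablation above explores.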
Summary: In this work, a number of recent advancements are added to the minimalist TD3+BC baseline, and the authors found that the resulting algorithm leads to new SOTA performance on the D4RL benchmark with raw state and visual input. Extensive empirical results and ablations are provided, and the authors show a dedication to the fairness and reliability of their comparisons. Strengths: **originality** - the paper studies existing methods, and the novelty of the work lies mainly in its empirical findings and ablations. However, due to the extensive experiments and attention to fairness and details, these results can be considered novel findings. **quality** - quality is great, paper is well structured, important things are highlighted and performance changes are quantified. - technical details are given, things like computational speed discussed, code provided, overall good. - authors even tuned and tested on different seeds. **clarity** - paper is very clear and easy to read. **significance** - Finding a new SOTA baseline for the offline RL setting, with a full set of ablations understanding the impact of each of its components, can be a good and significant contribution. - Although no new algorithm is introduced, these results can be very helpful to researchers who want to really understand what is helping the performance (whether it's an algorithmic component or just a code hack or hyperparameter choice). Weaknesses: **originality** - the paper focuses on an empirical study of existing methods, which will reduce the novelty a bit. Other than this, I think within its own scope, the paper is very good. No major concern. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The authors mentioned that to be fair, the competing methods have been extensively tuned. I wonder how much performance gain you were able to achieve for these competing methods compared to their reported results in the original papers?
- when you tune for d4rl, do you fine-tune a different set of hyperparameters for each task? Is this done for all algorithms in the comparisons? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Authors discussed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. Regarding your questions: > when you tune for d4rl, do you fine-tune a different set of hyperparameters for each task? Is this done for all algorithms in the comparisons? Yes, we tune hyperparameters per-dataset for each method. > The authors mentioned that to be fair, the competing methods have been extensively tuned. I wonder how much performance gain you were able to achieve for these competing methods compared to their reported results in their original papers? Here are the scores over four training seeds. The "paper" label stands for hyperparameters taken from the original papers and the "tuned" label stands for the best hyperparameters we found. As you can see below, the gains from per-dataset hyperparameter tuning are significant. This is often done for new methods and missed for the baselines considered. We believe this is expected, as different datasets probably benefit from varied levels of penalization.

| **Dataset** | **IQL, paper** | **IQL, tuned** | **TD3 + BC, paper** | **TD3 + BC, tuned** |
|---|---|---|---|---|
| halfcheetah-random | 9.4 | 19.6 | 2.2 | 30.1 |
| halfcheetah-medium | 48.3 | 50.2 | 44.6 | 55.4 |
| halfcheetah-expert | 96.4 | 96.6 | 93.8 | 95.5 |
| halfcheetah-medium-expert | 94.7 | 94.7 | 91.9 | 91.9 |
| halfcheetah-medium-replay | 44.4 | 45.0 | 40.5 | 45.1 |
| halfcheetah-full-replay | 74.9 | 75.6 | 69.3 | 74.1 |
| hopper-random | 7.5 | 11.8 | 10.3 | 10.3 |
| hopper-medium | 67.5 | 67.5 | 53.2 | 57.6 |
| hopper-expert | 100.0 | 112.7 | 108.7 | 110.7 |
| hopper-medium-expert | 80.7 | 112.4 | 75.8 | 106.2 |
| hopper-medium-replay | 97.4 | 97.6 | 64.5 | 64.5 |
| hopper-full-replay | 104.4 | 108.4 | 49.9 | 106.2 |
| walker2d-random | 4.0 | 12.0 | 4.5 | 4.5 |
| walker2d-medium | 80.9 | 82.5 | 77.1 | 77.1 |
| walker2d-expert | 112.8 | 113.8 | 109.1 | 110.1 |
| walker2d-medium-expert | 111.7 | 112.4 | 108.9 | 110.2 |
| walker2d-medium-replay | 82.1 | 83.0 | 50.9 | 58.8 |
| walker2d-full-replay | 97.7 | 98.2 | 86.7 | 89.4 |
| **Gym-MuJoCo avg** | 73.0 | 77.4 __(+6%)__ | 63.4 | 72.0 __(+13%)__ |
| antmaze-umaze | 76.0 | 84.2 | 62.0 | 62.0 |
| antmaze-umaze-diverse | 59.5 | 75.0 | 48.0 | 48.0 |
| antmaze-medium-play | 69.7 | 70.2 | 0.0 | 39.0 |
| antmaze-medium-diverse | 63.0 | 64.7 | 0.5 | 18.5 |
| antmaze-large-play | 41.5 | 41.5 | 0.0 | 0.2 |
| antmaze-large-diverse | 22.5 | 29.5 | 0.5 | 0.5 |
| **Antmaze avg** | 55.3 | 60.8 __(+10%)__ | 18.5 | 28.0 __(+80%)__ |
| pen-human | 87.1 | 93.6 | 65.9 | 77.6 |
| pen-cloned | 73.2 | 89.4 | 78.1 | 78.1 |
| pen-expert | 130.9 | 134.6 | 144.9 | 144.9 |
| door-human | 3.5 | 7.0 | 0.0 | 0.0 |
| door-cloned | 1.0 | 2.8 | 0.4 | 0.4 |
| door-expert | 106.0 | 106.5 | 102.5 | 105.8 |
| hammer-human | 1.5 | 2.5 | 0.3 | 0.3 |
| hammer-cloned | 1.4 | 4.1 | 1.1 | 1.1 |
| hammer-expert | 127.7 | 130.0 | 127.0 | 127.0 |
| relocate-human | 0.0 | 0.6 | 0.0 | 0.0 |
| relocate-cloned | 0.0 | 0.2 | -0.1 | -0.1 |
| relocate-expert | 106.0 | 108.2 | 107.9 | 107.9 |
| **Adroit avg** | 53.1 | 56.6 __(+6%)__ | 52.3 | 53.5 __(+2%)__ |

--- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you to the authors; the paper looks solid. I'm increasing my score to 7.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their work. Hopefully, we have provided a sufficient level of response to all; if not, we are open to continuing our discussion. Here, we include additional results requested by the reviewers, explicitly or implicitly, in order to provide comprehensive empirical support for our answers: - Adding design elements separately to TD3+BC - Ablating decoupled penalization - Offline-to-online experiments Pdf: /pdf/3b0cf16d9e17b0d43257b43cc2042203fdcd1044.pdf
NeurIPS_2023_submissions_huggingface
2023
Online PCA in Converging Self-consistent Field Equations
Accept (poster)
Summary: This paper explores online PCA methods for a certain type of non-linear eigenvalue problem. Strengths: This is a well-written paper on an interesting problem in computational science. Weaknesses: My main critique is that, as presented, this paper appears to contribute not so much to machine learning or data science as to computational science. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you please expand the last paragraph of the conclusion section on how this is a contribution to machine learning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. Since our work applies online PCA methods to an otherwise unexplored area, the specific self-consistent eigenproblem, it contributes to the machine learning community by expanding the reach of online PCA methods. Before our work, online PCA methods were regarded as specialized methods for handling the stochasticity issue in online or streaming environments. Our work shows that they are also capable of handling self-consistency issues in solving an important class of nonlinear eigenvalue problems. Additionally, the concept of self-consistency is closely related to mean-field theory, which is an active topic in machine learning. Machine learning and physics are closely connected through shared concepts such as mean-field theory and the Boltzmann machine, which suggests potential applications of our work to a broader range of machine learning problems involving mean-field characteristics. Moreover, according to the CfP of NeurIPS 2023, > NeurIPS 2023 is an interdisciplinary conference that brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields. and we believe that our work fits in the topics of "Optimization" and "Machine learning for sciences". --- Rebuttal Comment 1.1: Title: response to authors Comment: Since the rest of the reviewers rather liked the paper, and I have no concern regarding its quality, I have raised my score. I do encourage the authors to mention clearly in the paper (possibly in the introduction and not at the end of the paper) the merits of the paper pertaining to machine learning and data science.
Summary: The paper presents a new online-PCA based algorithm with some additional computational innovations to solve self-consistent systems. They add a mode-switching method and delayed calculation to improve convergence issues. The results are very good, but on a somewhat limited/niche dataset. Strengths: The paper is well presented, simple and shows very good results on a task that is seemingly impossible to solve with other methods. Weaknesses: I am not an expert in the application area (electronic structures), but it seems like there could have been more extensive experiments to illustrate the benefits of the method. I don't think sec 4.1 gives a good enough picture of the benefits of the new method. Also, the algorithm boxes on pages 4 and 5 are slightly confusing, and figure 4 is misplaced and seems to be covering up some text (at least in my printed version). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are the methods compared with in sec 4.2 the only other methods that can solve SCF problems? Are there any other real-world problems other than electronic structures where this methodology can be used? Non-stationary time series? In Figure 2, is it possible to explain the intuition behind the role of F? It would help the paper to have an idea of what the "typical role" of F in systems such as these is. (maybe this is not possible, but I still ask...) What is DIIS? I did not see an introduction to this, and it seems like it suddenly popped up without a definition. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comment. The detailed responses regarding each concern are listed below. For other methods: while multiple existing methods exist, most of them are variations of the DIIS technique. An incomplete list includes energy-DIIS, augmented-DIIS, LIST, GDIIS, and RMM-DIIS, which may behave more efficiently or with better convergence in specific scenarios, and can be good alternatives when standard DIIS fails. However, most of these variations are about including quantum-chemistry-specific information in the DIIS procedure (e.g., energy-DIIS minimizes the Hartree-Fock energy functional, and augmented-DIIS minimizes the augmented Roothaan-Hall energy function), and these ideas can also be included in our methods. To compare apples to apples, we would also have to develop corresponding variations of our method, from "energy-Adaptive SCF" and "augmented-Adaptive SCF" to "RMS-Adaptive SCF", to make a fair comparison, which could be a bit too exhausting and quantum-chemistry oriented, and may not be of interest to the majority of the NeurIPS audience. We leave the study of how these variations affect our proposed method and DIIS as future work to be presented to the quantum chemistry community. We will correct the misplacement of Figure 4 in the revised paper; sorry for the confusion. The covered sentence is "where $\psi_0$ is the initial angle between the vector and the xy plane." For other problems, we think our approach may also model crowd behavior in social science, in which the behavior of an individual is influenced by the "representative options" of the crowd, and the representative options of the crowd are in turn influenced by averaging the changes of all individuals. Generally speaking, systems with "mean-field" characteristics may benefit from this methodology.
For the role of F: while F is the "input" of the problem in eq (1) and could be arbitrarily defined, a typical definition of F in the electronic-structure scenario is given in eq (9), whose computational details are shown in lines 41 and 51 of the appendix. As for DIIS, it is a convergence acceleration technique introduced in line 71 of the appendix. Sorry for the confusion; we will revise the main paper to include an introduction to DIIS.
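Since the reviewer asked what DIIS is, a minimal numpy sketch of one Pulay/DIIS extrapolation step may help readers outside quantum chemistry. Real SCF codes store Fock matrices and commutator residuals; this toy version works with generic vectors and is not the implementation used in any of the compared codes.

```python
import numpy as np

def diis_extrapolate(xs, es):
    """One DIIS (Pulay mixing) step: combine stored iterates xs so
    that the same linear combination of their residuals es has
    minimal norm, subject to the coefficients summing to one.

    Solves the bordered system  [B 1; 1^T 0][c; lam] = [0; 1]
    with B_ij = <e_i, e_j> (a Lagrange-multiplier formulation).
    """
    m = len(xs)
    M = np.zeros((m + 1, m + 1))
    for i, ei in enumerate(es):
        for j, ej in enumerate(es):
            M[i, j] = np.dot(ei, ej)
    M[m, :m] = M[:m, m] = 1.0
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    c = np.linalg.solve(M, rhs)[:m]
    return sum(ci * xi for ci, xi in zip(c, xs))

# If the residual is linear in x (here e(x) = 2 * (x - 3)), two stored
# iterates already let DIIS land on the exact fixed point x = 3.
x_new = diis_extrapolate([np.array([0.0]), np.array([1.0])],
                         [np.array([-6.0]), np.array([-4.0])])
```

Because the coefficients may be negative, DIIS extrapolates rather than merely averages, which is the source of both its acceleration and its occasional convergence failures.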
Summary: In this work, the authors approach solving the Self-consistent Field (SCF) equation from the perspective of principal component analysis (PCA) for non-stationary time series. They show that the equilibrium state of such an online PCA corresponds to the solution of the SCF equations. By doing so, this work is able to achieve better convergence than the traditional fixed-point iteration methods for solving such equations. Strengths: - As mentioned in the paper, solving the Self-consistent Field (SCF) equation is of great significance in computational science for its connection to the Schrödinger equation, so proposing a novel approach to overcome the non-convergence issues of the traditional fixed-point iteration methods is important. - The authors also mention that these are the first steps in devising PCA-based algorithms for converging non-linear equations, so further study in this direction can help solve other relevant problems. Weaknesses: - Please edit line 181 in the manuscript. Part of the line is covered by figure 4. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why only the first eigenvector? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - This paper focuses on solving one important but rather niche problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comment. We will correct the layout error in line 181. The covered sentence is "where $\psi_0$ is the initial angle between the vector and the xy plane." The reason to include only the first eigenvector/eigenvalue in eq (1) is simplicity of form, as the main issue of SCF equations, self-consistency, remains. In a more complicated case shown in eq (8), we include the top-k eigenvectors/eigenvalues and discuss this extension in lines 211 and 212.
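To illustrate the top-eigenvector viewpoint in the reply above, here is a toy numpy sketch of solving a small self-consistent eigenproblem with an Oja-style online PCA update. The 3x3 operator, the step size, and the iteration count are our own illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
D = np.diag([1.0, 2.0, 5.0])   # fixed part of the operator

def F(v):
    """Toy self-consistent operator: the matrix depends on its own
    candidate eigenvector. Here v = e_3, the top eigenvector of D,
    satisfies the self-consistency condition F(v) v = lambda v."""
    return D + 0.5 * np.outer(v, v)

# Oja-style update: nudge w toward the top principal direction of the
# *current* F(w) instead of fully re-diagonalizing at every step,
# which damps the self-consistency feedback loop.
w = rng.normal(size=3)
w /= np.linalg.norm(w)
for _ in range(2000):
    Fw = F(w) @ w
    w += 0.1 * (Fw - (w @ Fw) * w)   # Oja's rule
    w /= np.linalg.norm(w)
# w now aligns (up to sign) with the self-consistent solution e_3.
```

A plain fixed-point iteration would recompute the full top eigenvector of F(w) at each step; the small-step update above is what allows online PCA machinery, and its stochastic variants, to be brought to bear on the self-consistency problem.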
Summary: This paper proposes a new method for solving self-consistent field equations - a form of nonlinear generalized eigenvalue problem in which the matrix being diagonalized is a function of its own eigenvectors. These equations are of great interest in quantum chemistry and are typically solved via fixed-point iteration, which can suffer from instability. This paper proposes a connection to Principal Component Analysis (PCA) by viewing the function F(v) as a mapping from a vector to a data distribution, wherein the map F acts as a form of decoder/reconstruction function and PCA itself acts as an encoder/compression function. This formulation is entirely equivalent to the original problem, but allows the use of certain modified online/adaptive PCA methods to stabilize the iterative procedure. The authors demonstrate that this method performs better than vanilla SCF iterations in a specific, theoretically tractable case study, then apply the method to the more difficult case of solving the Kohn-Sham equations in electronic structure theory. The authors test their method on the QM9 dataset, which contains a large number of molecules for the purposes of electronic structure calculations. They sample 1% of the dataset at random, then evaluate their method on each case in question and compare to the existing SCF implementation from the PySCF package. They find that their method results in convergence for all molecules considered (compared to 70-90%), while requiring roughly 2-3x more iterations. Strengths: The proposed method is interesting, and applies machine learning techniques to a foundational problem in quantum chemistry. Improving the accuracy or efficiency of Hartree-Fock/DFT calculations would be highly valuable to the quantum chemistry community, and is thus an interesting area of research.
The proposed method performs well on the theoretical case study, and the results in table 1 show a clear improvement in convergence rate over the existing baselines under discussion. Weaknesses: It would be helpful to have a better understanding of the significance of these results in the context of quantum chemistry, where they are most likely to be used. The results shown in Table 1 show that the existing baseline achieves a convergence rate of approximately 70-90% depending on the scenario, whereas the proposed approach achieves 100% convergence at a cost of approximately 2-3x the number of iterations. Within the field of quantum chemistry, is failure to converge a significant limitation, and is this tradeoff worth it? While I am not a quantum chemist, my understanding is that DFT calculations are already exceptionally computationally expensive, and significantly increasing the number of iterations required for convergence may be a very severe drawback. In addition, it would be good to see a more thorough comparison. The experiments in this paper are only performed on a single dataset, and only compare to a single baseline. While I, again, am not a quantum chemist, my brief review of the literature revealed a number of existing methods seeing widespread use - including RMM-DIIS (is this the one used as a baseline in the paper?) as well as Davidson or Blocked Davidson iterations, and combination methods combining both RMM-DIIS and Blocked Davidson. Unless I am misunderstanding the applicability of these methods to the problem under consideration, it would be helpful to see a comparison to a broader range of baselines, as well as a wider range of datasets. As is, I feel like the comparisons in section 4 are too narrow to provide a compelling case for the proposed approach, but it is entirely possible that I am missing important context from the field of quantum chemistry. I am happy to revisit this if my understanding is incomplete. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Is convergence rate a primary limiting factor in Hartree-Fock/DFT calculations in practice? If so, how does this compare with the potential downsides of increased computational cost required by your method? Am I correct in assuming that the RMM-DIIS method is the one used as a baseline in the paper? If so, was there a reason other methods were omitted from comparisons? Is there a reason these other methods are not applicable to the problem under consideration? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It would be helpful if the authors gave a more complete presentation of the applicability of the proposed approach in the context of the quantum chemistry literature, and the significance of their results in that context. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comment. The detailed responses regarding each concern are listed below. For the trade-off between convergence and efficiency, note that this trade-off can be adjusted in our proposed method by setting different $T_{\text{cut-off}}$ in L230 of the paper. If a larger $T_{\text{cut-off}}$ is set, then the Online SCF portion will be more dominant, which leads to empirically better convergence at the cost of a larger number of iterations. If a smaller $T_{\text{cut-off}}$ is set, then the Regular SCF portion will be more dominant, which leads to a smaller number of iterations for converged molecules, while non-converged molecules are more likely to appear (if $T_{\text{cut-off}}$ is set to zero, the proposed method is equal to Regular SCF). In this work, since we are more focused on the convergence side, as the title suggests, we set a relatively large $T_{\text{cut-off}}$ to boost the convergence performance. However, the parameter can be set differently to fit practical scenarios. If you are aware that the molecule of your interest may be challenging to converge, you may wish to set a larger $T_{\text{cut-off}}$ to strengthen the convergence capability of the method, while otherwise you can set a smaller $T_{\text{cut-off}}$ for efficiency. We will elaborate more about this feature of our method in the revised paper. For the comparison, while multiple existing methods exist, most of them are variations of the DIIS technique. An incomplete list includes energy-DIIS, augmented-DIIS, LIST, GDIIS, and the RMM-DIIS you mentioned, which may behave more efficiently or converge better in specific scenarios, and can be good alternatives when standard DIIS fails.
However, most of these variations are about including quantum-chemistry-specific information in the DIIS procedure (e.g., energy-DIIS minimizes the Hartree-Fock energy functional, and augmented-DIIS minimizes the augmented Roothaan-Hall energy function), and these ideas can also be incorporated into our method. To compare apples to apples, we would also need to develop corresponding variations of our method, from "energy-Adaptive SCF" and "augmented-Adaptive SCF" to "RMM-Adaptive SCF", which could be a bit too exhaustive and quantum-chemistry oriented, and may not be of interest to a majority of the NeurIPS audience. We leave the study of how these variations affect our proposed method and DIIS as future work to be presented to the quantum chemistry community. For the dataset in Sec 4.2, we selected the QM9 dataset mainly for the sufficient challenge it poses to convergence. Since the focus of our work is on convergence capability, we intended to make the experiment challenging enough to differentiate the methods. We actually did experiments on other datasets; however, preliminary results show that small datasets like W4-17 [1] are not challenging enough for our experiment, as both our method and the baseline converge very well. We will mention the results on other datasets in the revised paper. [1] Karton, Amir, Nitai Sylvetsky, and Jan M. L. Martin. "W4-17: A Diverse and High-Confidence Dataset of Atomization Energies for Benchmarking High-Level Electronic Structure Methods." Journal of Computational Chemistry 38, no. 24 (2017): 2063-75. --- Rebuttal 2: Title: Response to Authors Comment: After seeing the response from the authors, many of my concerns remain unaddressed. Given that the rest of the reviewers had a higher opinion of the work I will raise my rating to borderline, but I still believe that the experimental comparisons are incomplete, and that the benefit of the proposed approach within the context of quantum chemistry compared to existing approaches is unclear.
I do think the paper is interesting and has potential, but as is I don't think the paper makes a clear enough case for an improvement relative to the state of the art.
NeurIPS_2023_submissions_huggingface
2023
TransHP: Image Classification with Hierarchical Prompting
Accept (poster)
Summary: This paper studies the problem of image classification leveraging the idea of hierarchical image classification (HIC), which exploits semantic relations across target classes to learn meaningful, distinctive features. The core idea of hierarchical classification is that if a model must classify plants and knows that two flower species belong to the "rose" family, it can focus on features that distinguish the two classes instead of more general features that associate the classes with the flower category. The authors consider vision transformers as models and introduce a new "prompting block" that aims at modeling the coarse class to which the target classes belong, so that the features extracted by the transformer are dynamic with respect to this task-and-image-specific information. At an intermediate layer of the transformer, a set of learnable vectors, i.e., prompt tokens, one per coarse class, is prepended to the feature tokens of an image (composed of the class token and the patch tokens). The transformed prompt tokens are used to compute the similarity between the tokens and learnable prototypes of the coarse classes via a cross-entropy loss. In this setting, the prompting block explicitly learns the coarse-class prompts while learning to predict the coarse class of the input image. Experiments demonstrate that training transformers with the prompting block(s) improves performance on tasks where a hierarchy can be defined (e.g., ImageNet, iNaturalist). An ablation study shows that embedding the information of the coarse classes without coarse-class prompts already helps to improve the results, but prompting does so even more. Compared to the baselines, the method is more data-efficient. Moreover, a qualitative study shows that the method attends to more distinctive features of the images. Strengths: 1.
The paper is easy to follow and supports the method explanation with good visualizations. 2. The experiments support the validity of the method and, more importantly, the claims of the authors 3. The results confirm the intuition of the authors about the capability of focusing on more discriminant features Weaknesses: 1. Although the method consistently works on the tested tasks, what are the characteristics of the tasks that let us understand that the method generalizes? Number of classes? Number of added prompt blocks (determined by the number of hierarchical levels)? Or other characteristics? 2. The application of the method is limited to the cases where we have access to a hierarchy among target tasks. In the question section, I ask if we can show that the method allows us to learn better pre-training embeddings that lead to improvements on other downstream tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors explain that for tasks with multiple hierarchical levels, they add a prompting block for each of them. Do you think it is necessary to exploit all the hierarchical levels? Or maybe the more granular coarse levels are enough for better conditioning? 2. By construction, the method is limited to cases where one is able to build a hierarchical relation across target tasks. However, training ViT on ImageNet with the hierarchical information added via the prompting block might help in learning a space that generalizes better than standard ViT. Did you try to use TransHP pre-trained on ImageNet as the backbone to fine-tune with a few labels on other downstream tasks? Maybe fine-grained? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations were mentioned and I asked follow-up questions in a previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
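The prompting block described in the summary can be sketched in a few lines. This is an illustrative numpy sketch under our own assumptions (a single softmax self-attention step as a stand-in for a transformer layer, a dot product as the prompt-prototype similarity, and prompt i paired with prototype i), not the authors' implementation:

```python
import numpy as np

def coarse_prompt_loss(tokens, prompts, prototypes, coarse_label):
    """tokens:     (N, d) class + patch tokens at an intermediate layer
    prompts:    (M, d) learnable prompt tokens, one per coarse class
    prototypes: (M, d) learnable coarse-class prototypes
    Returns the cross-entropy loss for predicting the coarse class."""
    x = np.concatenate([prompts, tokens], axis=0)      # prepend the prompt pool
    # Stand-in for a transformer layer: one softmax self-attention mixing step.
    scores = x @ x.T / np.sqrt(x.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    out = attn @ x
    out_prompts = out[: prompts.shape[0]]              # transformed prompt tokens
    logits = np.sum(out_prompts * prototypes, axis=1)  # similarity to prototypes
    log_probs = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    return -log_probs[coarse_label]

rng = np.random.default_rng(0)
loss = coarse_prompt_loss(rng.standard_normal((5, 8)),
                          rng.standard_normal((3, 8)),
                          rng.standard_normal((3, 8)),
                          coarse_label=1)
```

Per the summary, this coarse cross-entropy is added to the usual fine-grained classification loss, so the prompts are trained to encode the coarse class while still participating in attention over the patch tokens.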
Rebuttal 1: Rebuttal: Dear reviewer, we hope our response helps clear up your initial concerns/questions. We would be happy to provide further clarifications where necessary. **1. Although the method consistently works on the tested tasks, what are the characteristics of the tasks that let us understand that the method generalizes? Number of classes? Number of added prompt blocks (determined by the number of hierarchical levels)? Or other characteristics?** Ans: Thanks for the question. The characteristics that we believe may benefit other tasks are: (1) The learning process of prompts. Most of the previous prompting methods consist of two stages, i.e., pre-training a base model and then learning the prompts for novel downstream tasks. When learning the prompts, the pre-trained model is usually frozen. We show that an end-to-end training scheme is easier and also works. (2) The meaning of prompts. The prepended prompts in previous works have no specific meaning. In our work, we show each prompt represents a coarse class. In other works, exploring the meaning of prompts may also be helpful. **2. The application of the method is limited to the cases where we have access to a hierarchy among target tasks. In the question section, I ask if we can show that the method allows us to learn better pre-training embeddings that lead to improvements on other downstream tasks.** Ans: Thanks for the question. Our method (as well as many other HIC methods) has the potential to benefit scenarios without hierarchical labels. This is because we can automatically extend the original annotations into hierarchical annotations, which is very economical. Specifically, as stated in Line 23 - Line 26 in the manuscript, given the fine-grained labels, one can automatically obtain the coarse labels through taxonomy information (e.g., WordNet) or word embeddings from language models. For example, the coarse labels used for ImageNet are generated from WordNet automatically.
For the benefit of improvements on downstream tasks, please refer to the answers below. **3. The authors explain that for tasks with multiple hierarchical levels, they add a prompting block for each of them. Do you think it is necessary to exploit all the hierarchical levels? Or maybe the more granular coarse levels are enough for better conditioning?** Ans: Thanks for the question. It is not necessary to exploit all the hierarchical levels, and the more granular coarse levels are enough. In Section E of the supplementary material, we conclude that "only the last two coarse level classifications (arranged at the 9th and 10th transformer layer) contribute most to the final classification accuracy." For more details, please refer to that section and Fig. 7. **4. By construction, the method is limited to cases where one is able to build a hierarchical relation across target tasks. However, training ViT on ImageNet with the hierarchical information added via the prompting block might help in learning a space that generalizes better than standard ViT. Did you try to use TransHP pre-trained on ImageNet as the backbone to fine-tune with a few labels on other downstream tasks? Maybe fine-grained?** Ans: Thanks for the question. We find that the TransHP-pre-trained backbone benefits downstream tasks. In Table 1, TransHP (w Pre) denotes using the original pre-trained model; if we change it to the TransHP-pre-trained model, the performance on iNaturalist-2018, iNaturalist-2019, CIFAR-100, and DeepFashion can be further improved by +2.14% (64.21% to 66.35%), +1.27% (71.62% to 72.89%), +1.18% (86.85% to 88.03%), and +0.52% (89.93% to 90.45%), respectively. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! After reading the other reviews and discussions - I am still leaning toward the acceptance of the paper, thus confirming my score.
One suggestion for the authors is to expand a bit more on questions from other reviewers letting them understand why you made particular choices they are skeptical about. --- Reply to Comment 1.1.1: Comment: Many thanks for your positive rating and suggestions. We will try to address all the other reviewers' concerns.
Summary: This paper aims to improve image classification accuracy by introducing hierarchical prompts. For a dataset with an L-level hierarchy and M prompts at each level, the authors propose inserting M prompts at each of L randomly selected transformer layers. Each prompt is tasked with predicting the intermediate coarse label in the hierarchy in a supervised manner. While the proposed idea is interesting, its scalability and novelty are limited. See weaknesses for more details. Strengths: 1. Figure 4 highlights the difference between the proposed method and the baselines. 2. Figure 2 clearly summarizes the high-level idea of the paper. 3. The paper is simple to follow. Weaknesses: Major 1. The proposed method inserts M prompts in the lth prompt block (see Eq. 5 and Eq. 6). Since each prompt represents a coarse class (L129-L131) at the lth level of the hierarchy, the underlying assumption is that the hierarchy is a balanced tree (e.g., each level of the hierarchy has M coarse classes). What will happen if the hierarchy is imbalanced? Will the proposed method still work? The authors are suggested to conduct experiments on a dataset that has an imbalanced hierarchy. 2. L103-104 is confusing. Does the model optimize the entire backbone model (e.g., model parameters other than the prompts) + prompts? Or just the prompts? 3. Caption of Figure 2. As shown in Eq. 2, the prompt is appended after the class and patch tokens. This contradicts the caption of Figure 2, which claims "the prompting block pre-pends the whole prompt pool consisting of M prompts". 4. It is unclear how the proposed method TransHP selects an intermediate transformer block for inserting the prompt (L127). As mentioned by the authors in L159-L160, there is no explicit guidance on how to select the block for prompt insertion. Furthermore, there is no discussion of the case where the depth of the dataset hierarchy exceeds the number of transformer blocks. This indicates that the proposed method is not applicable to deep hierarchies.
5. Figure 4 is a bit misleading and there are some missing baselines (L221). For example, one could insert prompts (purple dots in figure 4) and do coarse classification using the [CLS] token. Is the proposed method better than this baseline? 6. The caption of Figure 3 mentioned that the prompt is autonomously selected. However, the Eq5 is nothing but a cross-entropy loss. There is nothing to be selected. 7. How is the proposed method different from a simple baseline that combines HD-CNN [a] into the transformer architecture? 8. The author is suggested to compare with VPT. [a] HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The author is suggested to address the concerns in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitation is mentioned in the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we hope our response helps clear up your initial concerns/questions. We would be happy to provide further clarifications where necessary. We hope our paper is acceptable based on the clarifications and the point-to-point responses below. **1. The proposed method inserts M prompts in the lth prompt block (see Eq. 5 and Eq. 6). Since each prompt represents a coarse class (L129-L131) at the lth level of the hierarchy, the underlying assumption is that the hierarchy is a balanced tree (e.g., each level of the hierarchy has M coarse classes). What will happen if the hierarchy is imbalanced? Will the proposed method still work? The authors are suggested to conduct experiments on a dataset that has an imbalanced hierarchy.** Ans: Thanks for the question. We respectfully disagree. We do not assume that the hierarchy is a balanced tree: for ImageNet, the numbers of coarse classes at each hierarchy level are 2, 2, 2, 4, 9, 16, 25, 49, 90, 170, 406, and 1000, respectively. The M at each level of the hierarchy differs. The classification at each level is across all coarse classes at that level; for example, we perform 170-class classification at the third-to-finest level. We did not encounter training problems with such an imbalanced hierarchy. **2. L103-104 is confusing. Does the model optimize the entire backbone model (e.g., model parameters other than the prompts) + prompts? Or just the prompts?** Ans: Sorry for the confusion. The entire backbone and the prompts are optimized together. We will clarify this in L103-104. **3. Caption of Figure 2. As shown in Eq. 2, the prompt is appended after the class and patch tokens. This contradicts the caption of Figure 2, which claims "the prompting block pre-pends the whole prompt pool consisting of M prompts".** Ans: Sorry for the confusion. Eq. 2 shows a classical use of the prompts. The caption of Fig. 2 corresponds to Eq. 3. **4.
It is unclear how the proposed method TransHP selects an intermediate transformer block for inserting the prompt (L127). As mentioned by the authors in L159-L160, there is no explicit guidance on how to select the block for prompt insertion. Furthermore, there is no discussion of the case where the depth of the dataset hierarchy exceeds the number of transformer blocks. This indicates that the proposed method is not applicable to deep hierarchies.** Ans: Thanks for the question. In Lines 161-163, we give a qualitative principle for setting the position for inserting the prompts. If the depth of the dataset hierarchy exceeds the number of transformer blocks, we suggest focusing on the last few coarse-level classifications. Supplementary D shows that overly coarse hierarchy levels do not contribute to the final result. **5. Figure 4 is a bit misleading and there are some missing baselines (L221). For example, one could insert prompts (purple dots in figure 4) and do coarse classification using the [CLS] token. Is the proposed method better than this baseline?** Ans: We thank you for this suggestion and provide the result of the requested experiment: 77.60%. This performance is similar to the "No prompts" setting and is a little better than the baseline, while worse than our TransHP. **6. The caption of Figure 3 mentioned that the prompt is autonomously selected. However, Eq. 5 is nothing but a cross-entropy loss. There is nothing to be selected.** Ans: Sorry for the confusion. Figure 3 shows that the attention value toward the "correct" coarse prompt increases as training proceeds. We do not apply an explicit loss for this but observe it happening autonomously. **7. How is the proposed method different from a simple baseline that combines HD-CNN [a] into the transformer architecture?
[a] HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition** Ans: The differences between our method and other hierarchical image classification methods, such as HD-CNN (with the transformer architecture), lie in **how the mapping function of the deep model is learned**. Specifically, simply changing the backbone of a method does not change how the mapping function is learned. For more details, please refer to Line 78~85. **8. The author is suggested to compare with VPT.** Ans: The differences between our TransHP and the VPT method are significant and fundamental. In general, VPT transfers the success of prompt-based efficient tuning from NLP to computer vision and still belongs to the efficient-tuning paradigm. In contrast, our TransHP has nothing to do with efficient tuning: it encodes the semantic hierarchy into prompts and improves image classification under the train-from-scratch paradigm. To be more concrete, our TransHP is significantly different from VPT (and previous efficient-tuning methods based on prompting) regarding four aspects, i.e., the objective, the structure, the prompt selection, and the training process of the prompts (see Line 91~104). --- Rebuttal Comment 1.1: Comment: I thank the authors for providing the rebuttal. However, my major concerns are not well addressed. 7. "The difference ... lie in how the mapping function of the deep model is learned". This is not very clear to me. Prior works in HIC (such as HD-CNN) also pass the coarse-grained prediction into later layers for finer-grained predictions. The idea is the same here. 8. It is true that VPT is designed for parameter-efficient training, but it should not be the reason why such a comparison cannot be made. The authors also adopt an ImageNet-pretrained model and fine-tune it for other datasets (i.e., the w Pre setting). I think the comparison is needed to justify the significance of the technical contribution.
--- Reply to Comment 1.1.1: Comment: Thanks for your time and discussion. We provide extra clarification for the remaining concerns. **7.** Compared to previous HIC methods, a distinct feature of our method is that we condition the subsequential feature extraction on the preceding coarse prediction. In other words, in TransHP, the **mapping** from the input image to the final feature space is dynamically conditioned on the intermediate (coarse) prediction, whereas previous methods, including HD-CNN, employ static feature extraction. This is the primary difference, along with others, such as the prompting mechanism. More specifically, let’s consider the comparison with HD-CNN. Although HD-CNN also utilizes coarse predictions to modify the fine-grained predictions, it doesn’t alter the feature extraction for the fine-grained classes and relies on a heuristic strategy to merge the coarse and fine predictions. In contrast, our TransHP dynamically conditions the feature extraction for fine-grained classes, which consequently influences the fine-grained predictions. We don’t use any heuristic strategy to merge the coarse and fine predictions. **8.** Thanks. Following your instruction, we conduct experiments on our employed datasets with the publicly-released code of VPT. The comparisons are summarised in the below table. Currently, we have only finished the experiments on CIFAR-100. We will update the results for the other datasets in 24 hours. We observe that on CIFAR-100, the VPT does not improve the baseline. This is reasonable because VPT’s benefit is mainly for the training efficiency (when the downstream task has limited data) and barely brings accuracy improvement when the downstream task has sufficient data. In contrast, our method lays no emphasis on fine-tuning efficiency (please kindly note that prompting and prompt-tuning are indeed two different concepts), but brings non-trivial improvement by modeling the hierarchical knowledge into prompting. 
| Model | iNaturalist-2018 (%) | iNaturalist-2019 (%) | CIFAR-100 (%) | DeepFashion (%) | |-------------|----------------------|----------------------|---------------|-----------------| | Baseline (w Pre) | 63.01 | 69.31 | 84.98 | 88.54 | | VPT (w Pre) | - | - | 82.74 | - | | TransHP (w Pre) | 64.21 | 71.62 | 86.85 | 89.93 |
Summary: The paper proposes to use coarse-token prompting for the task of image classification. The basic idea is to add coarse-label tokens at intermediate layers of a ViT and add an additional coarse classification loss at the intermediate level. It in some ways parallels convolutional-network-based hierarchical classification, which incorporates additional branches for coarse-level classification at intermediate levels. However, the idea is applied through prompting, and the authors claim that this is the first paper to apply prompting to the task of hierarchical classification. Strengths: - The idea of coarse-label prompting appears novel - Apart from some missing details, the paper is well written and the general idea is easy to comprehend - The results show that the proposed hierarchical prompting leads to consistent gains across the ImageNet, iNaturalist 2018/19, and CIFAR-100 datasets. Weaknesses: - Related work is not well covered. For instance, there are ideas from prior art which could be applied without much difficulty. For instance, [A] uses marginalization over the predicted fine-grained labels to obtain coarse-level predictions and applies an additional loss there. [B] simply trains two networks (one for each hierarchy level) and makes a post-hoc correction. Comparison with at least peer-reviewed papers like [A] is warranted. Also, a flavour of how these different papers have tackled the problem would be worthwhile. The authors do mention that they do not compare against convolutional methods; however, giving a general idea of branching and contrasting it with prompting would add value to the paper. - Many details are either not clear or missing. (a) For example, where exactly do they add the prompting? The exact location has to be given. Especially when multiple hierarchies are used, the locations of the prompting blocks need to be clarified. Line 159 mentions that they do not have an exact position scheme and that lower levels are better.
However, they must have used some scheme, and the authors should mention it. (b) How many hierarchies are used in the main paper results? It should be clearly mentioned. I could gather that for iNaturalist only two levels are used; however, there are no details regarding other datasets. (c) iNaturalist19, to my knowledge, has 7 levels of hierarchy (3 kingdoms, 4 phyla, 9 classes, 34 orders, 57 families, 72 genera, 1010 species). In the supplementary the authors mention using 6 levels for genus and 1010 for species. Where did the 6 come from? (d) In Table 5 of the supplementary material, how did you get level 11 for the iNaturalist dataset? Is that the block number or the level number in the table? - Is it meaningful to do Ablation 2? That's like sending some random vectors and checking if that helps. Not sure if that is worthwhile to show. - I could not fully understand the use of absorption weights in Equation 7. Are they just used for explainability and statistics (Figure 2), illustrating how the model is behaving, or are they used during training? Please clarify. - Another variation of your method could be to predict something like P^hat_{cls} and use that for coarse prediction. Did the authors try this variation, removing the need for the learnable prototypes w_i? Please clarify. - Other ablations, such as changing the position of the prompting level or varying the number of hierarchies used, would be valuable. What if only the bottom two hierarchies are used (leaf and just above)? Compare that against the full hierarchy. [A] Jong-Chyi Su and Subhransu Maji. Semi-supervised learning with taxonomic labels. arXiv:2111.11595, 2021 [B] Kanishk Jain, Shyamgopal Karthik and Vineet Gandhi. Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification. arXiv 2023 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please provide clarifications on the missing details, as mentioned in the limitations section. Clarify the other points raised in the previous section as well.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Task level limitations are mentioned. However, there is nothing which reflects on the functioning of the proposed framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we hope our response helps clear up your initial concerns/questions. We would be happy to provide further clarifications where necessary. We hope our paper is acceptable based on the clarifications and the point-to-point responses below. **1. Related work is not well covered. For instance, there are ideas from prior art which could be applied without much difficulty... [A] Semi-supervised learning with taxonomic labels [B] Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification** Ans: We thank the reviewer for pointing out this valuable related work and for the suggestions. We reimplemented [A] with a Transformer backbone using its official code on the ImageNet, iNaturalist-2018, iNaturalist-2019, CIFAR-100, and DeepFashion datasets. The experimental results are shown in the table below. We also find [B] to be valuable work and will cite [A] in the experimental results and [B] in the related work section. We agree and acknowledge that discussing how other papers solve the problem is worthwhile. | Accuracy| ImageNet| iNaturalist-2018 | iNaturalist-2019| CIFAR-100| DeepFashion| |------|------|------|------|------|------| | Baseline | 76.21 | 63.01 |69.31 | 84.98 |88.54 | | [A] | 77.08 | 63.21 | 70.97 | 85.01 | 89.70 | | **TransHP** | **78.65** | **64.21** | **71.62** | **86.85** |**89.93**| We have already compared CNN-based methods with prompting in Related Works (Line 78~Line 85): the fundamental difference between our prompting strategy and other general branching ideas is how the mapping function of the deep model is learned. For more details, please refer to Line 78~Line 85. **2. Many details are either unclear or missing. (a) For example, where exactly do they add the prompting? The exact location has to be given, especially when multiple hierarchies are used; the locations of the prompting blocks need to be clarified. Line 159 mentions that they do not have an exact position scheme and that lower levels are better.
However, they must have used some scheme, and the authors should mention it. (b) How many hierarchies are used in the main paper's results? This should be clearly stated. I could gather that for iNaturalist only two levels are used; however, there are no details regarding the other datasets. (c) iNaturalist19 to my knowledge has 7 levels of hierarchy (3-kingdom, 4-phylum, 9-class, 34-order, 57-family, 72-genus, 1010-species). In the supplementary material the authors mention using 6 levels for genus and 1010 for species. Where did the 6 come from? (d) In Table 5 of the supplementary material, how did you get level 11 for the iNaturalist dataset? Is it the block number or the level number in the table?** Ans: Thanks for the question. (a) The exact location of the prompting blocks is given in Table 5 of the supplementary material. Table 5 shows the balance parameters and the exact positions of the transformer layers where the prompting is added. For example, for iNaturalist-2019, we add the prompting at the 6th transformer block, and after the 11th transformer block, we perform the finest image classification. (b) The number of hierarchies is also shown in Table 5. We use 12 hierarchies for ImageNet, 3 hierarchies for DeepFashion, and 2 hierarchies for the other datasets. (c) Many thanks for pointing this out. The 6 is from the official dataset (train_val2019.tar.gz) downloaded from Kaggle, which contains the training and validation images in a directory structure following {iconic category name}/{category name}/{image id}.jpg. The 6 classes are amphibians, birds, fungi, insects, plants, and reptiles. We also notice that iNaturalist19 has another version of the hierarchy, i.e., the 7 levels you mention. This kind of annotation may further benefit our TransHP due to its deeper hierarchy. (d) Sorry for the confusion. It is the transformer block number: 11 means the final fine-grained classification after the final (11th) transformer block. **3. Is it meaningful to do Ablation 2?
That's like sending some random vectors and checking if that helps. I am not sure it is worthwhile to show.** Ans: Thanks for the question. We think that ablation shows that the performance gain comes from our prompting design rather than from the extra parameters. **4. I could not fully understand the use of the absorption weights in Equation 7. Is it just used for explainability and statistics (Figure 2), illustrating how the model is behaving, or is it used while training? Please clarify.** Ans: Sorry for the confusion. They are used only for explainability and statistics and NOT for training. We will clarify this in the paper. **5. Another variation of your method could be to predict something like P^hat_{cls} and use that for the coarse prediction. Did the authors try this variation, removing the need for the learnable prototypes w_i? Please clarify.** Ans: We thank you for the good suggestion. We added this variation and found its performance similar to ours (78.59% vs. 78.65%). **6. Other ablations, like changing the position of the prompting level or varying the number of hierarchies used, would be valuable. What if only the bottom two hierarchies are used (leaf and the one just above)? Compare that against the full hierarchy.** Ans: Thanks. We have already included this kind of ablation in the supplementary material: Fig. 7 shows that using two hierarchies achieves 77.71% accuracy and using three hierarchies achieves 78.50% accuracy, compared to 78.65% with all hierarchies. **7. Task-level limitations are mentioned. However, there is nothing which reflects on the functioning of the proposed framework.** Ans: Thanks. We think the functioning limitation of our method is “we do not have an exact position scheme for inserting the prompting block.” To mitigate this, we suggest a qualitative principle for setting the position of the prompts: if the number of coarse classes is small (large), the position of the corresponding prompting block should be close to the bottom (top).
For more details, please refer to Line 159~166. --- Rebuttal Comment 1.1: Title: Post rebuttal comment Comment: I have carefully read the author rebuttal. The authors address some of my comments in a satisfactory manner, hence I am increasing my rating to borderline accept. It appears that the method gives similar performance without the learnable prototypes; that does raise a concern about unnecessary complexity. I would expect the authors to compute and update all tables without the learnable prototypes (directly predicting P^hat_{cls} and using that for coarse prediction). I also suggest that the authors add a discussion of other methods like [A] and [B]. I also request some other clarifications: - Are you using only two hierarchies in the iNaturalist dataset? "Amphibians, birds, fungi, insects, plants, and reptiles" appear to be coarse labels, not a hierarchy. - The prompting with multiple hierarchies is not entirely clear. Can you expand on that aspect a bit, while clearly mentioning the prompting positions? --- Reply to Comment 1.1.1: Comment: Many thanks for your time and discussion. We sincerely appreciate that you increased your rating and recommended acceptance. We will revise the paper according to your suggestions and answer your two new questions below: **1.** Yes, we use two hierarchies in the iNaturalist dataset, *i.e.*, **hier_1**: 6 coarse classes (iconic category): amphibians, birds, fungi, insects, plants, and reptiles; **hier_2**: 1010 fine classes (category name). **2.** Sure, we will add the implementation details below: **ImageNet:** The ImageNet dataset contains **12** hierarchies, and the Transformer backbone has 12 blocks. Therefore, **each** transformer block is used as a prompting block. From bottom to top, the numbers of coarse/fine classes are: 2, 2, 2, 4, 9, 16, 25, 49, 90, 170, 406, 1000.
**DeepFashion:** The DeepFashion dataset contains **3** hierarchies; according to our qualitative principle, the prompting positions are arranged as follows: the coarsest hierarchy (with 2 classes) is arranged at the **6th** transformer block; the middle hierarchy (with 17 classes) is arranged at the **8th** transformer block; the finest hierarchy (with 7,982 classes) is arranged at the last transformer block. The other three datasets contain **2** hierarchies: **iNaturalist-2018/2019:** the prompting position is at the **6th** transformer block with 14/6 coarse classes; **CIFAR-100:** the prompting position is at the **8th** transformer block with 20 coarse classes.
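The injection scheme described in this thread can be sketched in code. This is a hypothetical minimal illustration, not the authors' implementation: the token shapes, the prototype-based coarse head, and the `softmax` helper are all assumptions on my part.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def prompting_block(patch_tokens, prompt_tokens, prototypes, coarse_label):
    """Hypothetical sketch of a TransHP-style prompting block.

    patch_tokens:  (T, d) image tokens entering the chosen transformer block
    prompt_tokens: (C, d) learnable prompts, one per coarse class
    prototypes:    (C, d) learnable coarse-class prototypes w_i
    coarse_label:  index of the ground-truth coarse class
    """
    # Inject the coarse prompts so that the (omitted) transformer block
    # attends jointly over image tokens and prompt tokens.
    tokens = np.concatenate([patch_tokens, prompt_tokens], axis=0)  # (T + C, d)

    # Auxiliary coarse prediction at this intermediate level: score each
    # coarse class by the similarity between its prompt token and its
    # prototype, then apply a cross-entropy loss.
    scores = (prompt_tokens * prototypes).sum(axis=1)               # (C,)
    coarse_loss = -np.log(softmax(scores)[coarse_label])
    return tokens, coarse_loss

# e.g. the iNaturalist-2019 setting from this thread: 6 coarse classes,
# prompts injected at the 6th transformer block (ViT-B/16-like shapes assumed).
rng = np.random.default_rng(0)
tokens, loss = prompting_block(rng.standard_normal((197, 768)),
                               rng.standard_normal((6, 768)),
                               rng.standard_normal((6, 768)), coarse_label=2)
```

In the actual model the concatenated tokens would then pass through the transformer block, and the coarse loss would be weighted by the balance parameters listed in Table 5 of the supplementary material.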
Summary: This paper introduces a new approach called hierarchical prompting for hierarchical image classification (HIC). It is incorporated into a model named TransHP, which uses broader class 'prompts' to better distinguish between similar classes. The process improves image classification accuracy, training data efficiency, and model explainability. For example, it enhanced ViT-B/16's ImageNet classification accuracy by 2.83%. Strengths: 1. The innovative concept of utilizing additive tokens to garner coarse class data, aimed at enhancing fine visual recognition, holds great potential. 2. The methodology proposed in this paper appears to be a valuable tool for understanding and interpreting the ViT backbone's functioning and structure. Weaknesses: 1. The foundation established in this paper seems to lack strength. I recommend utilizing a more robust baseline within the hierarchical image classification (HIC) domain. The chosen backbone, a lightweight Vision Transformer (ViT), does not exhibit wide applicability. It would enhance the argument's validity and strength if the authors considered employing the standard ViT-B as a baseline. 2. The diagram in Figure 4 implies that the bulk of the performance improvement arises from the coarse loss. This renders the introduction of prompts potentially insignificant, as they contribute less than a single point of improvement. Therefore, the authors should provide deeper insight into the rationale behind incorporating prompts. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In the method part, what does "soft weighting" mean? If I understand correctly, soft weighting happens without learning any special weighting parameters; instead, it means the attention value toward the "correct" coarse prompt increases as training proceeds over more epochs. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. There appear to be certain limitations to the methods employed, particularly for datasets devoid of HIC structures. Understanding the process of implementing these methods on such datasets would be enlightening. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we hope our response helps clear up your initial concerns/questions. We would be happy to provide further clarifications where necessary. **1. The foundation established in this paper seems to lack strength. I recommend utilizing a more robust baseline within the hierarchical image classification (HIC) domain. The chosen backbone, a lightweight Vision Transformer (ViT), does not exhibit wide applicability. It would enhance the argument's validity and strength if the authors considered employing the standard ViT-B as a baseline.** Ans: Thanks for the question. We have also provided experimental results for the main experiments on stronger baselines: (1) Table 2 shows the performance on ImageNet with ViT-B/16, ViT-L/16, DeiT-S, and DeiT-B. (2) Table 8 in the supplementary material shows the performance of our TransHP on ViT-B/16 with four other datasets: iNaturalist-2018, iNaturalist-2019, CIFAR-100, and DeepFashion. (3) The experiments of the Fig. 4 settings with DeiT-S and DeiT-B backbones are shown in Section I of the supplementary material. All of these experiments show the effectiveness of TransHP with stronger baselines and different settings. Following your suggestion, we will also add more experiments on stronger baselines to improve the paper further. **2. The diagram in Figure 4 implies that the bulk of the performance improvement arises from the coarse loss. This renders the introduction of prompts potentially insignificant, as they contribute less than a single point of improvement. Therefore, the authors should provide deeper insight into the rationale behind incorporating prompts.** Ans: Thanks for the question. With coarse labels alone, the performance is 77.58% (see Fig. 4 (2)), while with coarse labels and prompts, the performance is 78.65% (see Fig. 4 (4)).
When changing the backbone to a stronger one (DeiT, see Supplementary I L424~L426): without prompts there are NO observable improvements, while injecting prompts brings +0.73% (79.82% to 80.55%) for DeiT-S and +0.55% (81.80% to 82.35%) for DeiT-B. **3. In the method part, what does "soft weighting" mean? If I understand correctly, soft weighting happens without learning any special weighting parameters; instead, it means the attention value toward the "correct" coarse prompt increases as training proceeds over more epochs.** Ans: Thanks for the question. Yes, you are right. We will explain "soft weighting" more clearly in the revision. **4. There appear to be certain limitations to the methods employed, particularly for datasets devoid of HIC structures. Understanding the process of implementing these methods on such datasets would be enlightening.** Ans: Thanks for this concern. We note that our method (as well as many other HIC methods) has the potential to benefit scenarios without hierarchical labels. This is because we can automatically extend the original annotations into hierarchical annotations, which is very economical. Specifically, as stated in Line 23 - Line 26 of the manuscript, given the fine-grained labels, one can automatically obtain the coarse labels through taxonomy information (e.g., WordNet) or word embeddings from language models. For example, the coarse labels used for ImageNet are generated from WordNet automatically. We believe the extra benefit from the low annotation cost is an important reason that the HIC task is highly regarded. --- Rebuttal Comment 1.1: Comment: Thank you for the author's rebuttal; I will maintain my current score.
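The "soft weighting" confirmed in this thread (the class token's attention to the correct coarse prompt grows over training, with no learned weighting parameters) can be monitored with a small statistic like the following hypothetical sketch. The function name and the assumption that the class token is row 0 of the attention map are mine; per the authors' earlier clarification, such statistics serve analysis only, not training.

```python
import numpy as np

def prompt_attention(attn, num_prompts):
    """How much attention the class token pays to each injected coarse
    prompt token, renormalised over the prompts only.

    attn: (T, T) post-softmax attention map whose last `num_prompts`
          columns correspond to the prompt tokens; the class token is
          assumed to be row 0.
    """
    w = attn[0, -num_prompts:]
    return w / w.sum()
```

Tracking this quantity per epoch would show the weight on the correct coarse prompt rising during training, which is the behaviour the reviewer's question describes.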
NeurIPS_2023_submissions_huggingface
2023
Summary: This work presents a novel hierarchical prompting mechanism for hierarchical image classification, named TransHP. In TransHP, a set of prompt tokens is learned to represent coarse classes and is injected in the prompting block for coarse class prediction; the injected prompt tokens can strengthen the feature. The proposed method is evaluated on several benchmarks and shows consistent improvements over the baseline model. Strengths: - This work is well-written and well-organized, and the proposed method is simple and easy to implement. - Good performance compared to previous methods and the baseline model. - The visualization in Fig. 5 gives intuition for understanding the proposed method. - The study on the data-scarce scenario is good. Weaknesses: - The ImageNet performance of HiMulConE used in this work is even worse than in the official paper, which adopts ResNet-50. - Lacking an intuitive explanation of the data efficiency of the proposed method. - Table 4 shows that in the data-scarce scenario, the performance of all methods decreases significantly, and the results suggest that TransHP has better performance. However, I wonder if the setting used in this work is too weak. This is because recent semi-supervised work has shown that very high accuracy can be achieved with only 10% of the ImageNet data, and some self-supervised pre-training methods (e.g., MIM) can also alleviate the problem of data scarcity. - The main experiments are conducted on a lightweight ViT, which is a weak baseline. It would be best to conduct more experiments on a stronger backbone & pre-trained models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I hope the author could address my questions in the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. The author adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we hope our response helps clear up your initial concerns/questions. We would be happy to provide further clarifications where necessary. We hope our paper is acceptable based on the clarifications and the point-to-point responses below. **1. The ImageNet performance of HiMulConE used in this work is even worse than in the official paper, which adopts ResNet-50.** Ans: Thanks for the question. This phenomenon is reasonable because HiMulConE in our paper uses a lower baseline (ViT, 76.21 top-1 accuracy) instead of the original baseline (ResNet, 77.60 top-1 accuracy) in its official paper. Specifically, in its official paper, HiMulConE improves ResNet-50 from 77.60 to 79.14 (+1.54). In our paper, based on ViT (76.21), HiMulConE achieves a similar improvement (+1.31, to 77.52) but is indeed lower than in its official paper. **2. Lacking an intuitive explanation of the data efficiency of the proposed method.** Ans: We apologize for the omission. The data efficiency of the proposed method can be explained intuitively from two perspectives, one *philosophical* and the other *technical*. **Philosophical Perspective:** Imagine knowledge as the essence of everything humans have summarized over time. When you possess knowledge, you have the distilled essence of myriad experiences and learnings. The proposed method leverages this accumulated knowledge. In scenarios where data is limited, the power of such distilled knowledge becomes even more pronounced. **Technical Perspective:** Now, think of data not just as isolated pieces of information but in categories. Even when the dataset might seem limited, there could still be ample samples within the broader categories. This means that for these 'coarser' categories, accuracy can be achieved rapidly. Once the accuracy at this coarse level is established, the model can then use this foundation to prompt further.
It's like planting a tree - you start with a strong base and then branch out. **3. Table 4 shows that in the data-scarce scenario, the performance of all methods decreases significantly, and the results suggest that TransHP has better performance. However, I wonder if the setting used in this work is too weak. This is because recent semi-supervised work has shown that very high accuracy can be achieved with only 10% of the ImageNet data, and some self-supervised pre-training methods (e.g., MIM) can also alleviate the problem of data scarcity.** Ans: Thanks for the question. The data-scarce scenario in our setting differs from theirs: semi-supervised works leverage unlabeled data (100% of ImageNet data) as well as labeled data (such as 10% of ImageNet data) to increase classification performance; self-supervised pre-training models use 100% of the unlabeled data. Ours uses N% of the ImageNet training data and none of the unlabeled data. In conclusion, we focus on the **data-scarcity** problem, while theirs still use all of the data and focus on the **label-scarcity** problem. **4. The main experiments are conducted on a lightweight ViT, which is a weak baseline. It would be best to conduct more experiments on a stronger backbone & pre-trained models.** Ans: Thanks for the question. We have also provided experimental results for the main experiments on stronger baselines: (1) Table 2 shows the performance on ImageNet with ViT-B/16, ViT-L/16, DeiT-S, and DeiT-B. (2) Table 8 in the supplementary material shows the performance of our TransHP on ViT-B/16 with four other datasets: iNaturalist-2018, iNaturalist-2019, CIFAR-100, and DeepFashion. (3) The experiments of the Fig. 4 settings with DeiT-S and DeiT-B backbones are shown in Section I of the supplementary material. All of these experiments show the effectiveness of TransHP with stronger baselines and different settings. Following your suggestion, we will also add more experiments on stronger baselines to improve the paper further.
Combinatorial Optimization with Policy Adaptation using Latent Space Search
Accept (poster)
Summary: This paper presents an interesting CO agent model that conditions its policy on latent vectors and finds latent vectors through CMA-ES. In addition, by taking the latent vector into account in the learning process, this paper presents a training method that induces the agent to specialize to various instances. Strengths: 1. The idea of using latent vectors to condition the policy and updating the latent vectors via CMA-ES during inference is novel. 2. The training method designed with this inference method in mind is also interesting. Weaknesses: 1. Inference time is not included in Table 1. Since COMPASS is a method that continuously finds better solutions during inference, it is crucial to include the inference time in the experimental results. 2. It is difficult to view the experimental results of COMPASS as state-of-the-art considering EAS (Hottung et al., 2022), SGBS+EAS (Choo et al., 2022), and DPDP [1] in the CVRP experiments. 3. Overall, the description of the model structure, training method, and inference method lacks details. *** [1] Kool, Wouter, et al. "Deep policy dynamic programming for vehicle routing problems." Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Could you elaborate more on the conditioned decoder? 2. Since Equation (1) lacks a baseline, the learning stability is likely to be poorer compared to methods such as POMO and Poppy, which use a baseline in the gradient calculation. Is there a particular reason for not using a baseline? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and positive feedback. We have updated the paper accordingly and hope our answers further clarify the aspects of the COMPASS framework and the training procedure. > W1: Inference time is not included in Table1. Since COMPASS is a method of continuously finding a better solution during inference, it is crucial to include the inference time in the experimental results. We agree that this is an important context and will therefore include the inference times of all methods and CO problems (previously reported in the Appendix Table 2) to Table 1 of the main text. > W2: It is difficult to view COMPASS as SOTA considering EAS (Hottung et al., 2022), SGBS+EAS (Choo et al., 2022) and DPDP (Kool et al., 2022) in CVRP experiments. Whilst EAS and SGBS+EAS do provide stronger adaptation to larger CVRP instances, we emphasize that this performance comes also with practical trade-offs and that the totality of all experiments strongly supports COMPASS as the leading method. Concretely, EAS has orders of magnitude more adaptable parameters that must be re-trained (with non-negligible overhead and scalability challenges) on every considered instance - we refer the reviewer to our response to point W4 of reviewer ubWG for a more detailed discussion of the relative trade-offs of EAS. Moreover, on 9 out of 11 standard benchmarking tasks and all 18 generalization tasks, COMPASS outperforms the prior state-of-the-art approaches. With respect to the specific methods raised by the reviewer; (i) COMPASS outperforms the results presented by DPDP [1] on both TSP100 and CVRP100, (ii) COMPASS could, in principle, be combined with SGBS and EAS to further improve performance as the distribution of specialized policies encoded in the latent-space could still be used in conjunction with beam-search or fine-tuning at inference time. 
> W3: Overall, the description of the model structure, training method, and inference method lacks details. Details on these points are provided in the Appendices of the original submission; however, we accept that they are important context and will ensure each is either moved into, or explicitly referenced in, our next revision of the main text. Specifically: - The model architecture for TSP and CVRP is fully described in Appendix A.5. The JSSP model is described in Appendix A.12. - The training procedure is described in the Methods section of the main paper, which refers to Appendix A.7 for further details including pseudo-code for, and step-by-step details of, the algorithm. - The inference procedure is described in the Methods section of the main paper, which references Appendix A.8 for additional details and discussion of alternative inference-time search protocols. > Q1: Could you elaborate more on the conditioned decoder? The decoder is conditioned on a vector sampled from a 16-dimensional latent space. This latent vector is concatenated with the key, query, and value inputs of the multi-head attention decoder module. This conditioning allows us to create distinct policies while processing the same observation from the environment. Each latent vector corresponds to a unique policy, and thus, sampling the latent space to obtain vectors that our model can condition upon gives us an infinite set of policies. The conditioned decoder is described in Appendix A.5. As discussed in our response to W3, we will ensure that the updated manuscript contains a brief description of the conditioned decoder in the Methods section along with an explicit reference to Appendix A.5 for further details. > Q2: Since Equation (1) lacks a baseline, the learning stability is likely to be poorer compared to methods such as POMO and Poppy, which use a baseline in the gradient calculation.
We thank the reviewer for highlighting this oversight; in fact COMPASS does use a baseline in the gradient calculation; specifically, it is the same baseline used in POMO and Poppy. We will update Equation (1), which defines the gradient of the COMPASS objective, to the following: $$ \\nabla\_\\theta J\_{\\text{compass}} = \\mathbb{E}\_{\\rho \\sim \\mathcal{D}} \\mathbb{E}\_{z\_1, ..., z\_N \\sim \\mathcal{P}\_z} \\mathbb{E}\_{\\tau\_i \\sim \\pi\_\\theta(\\cdot |z\_i)} [\\nabla\_\\theta \\log \\pi\_\\theta(\\tau\_{i^\\star} | z\_{i^\\star})(R\_{i^\\star} - \\mathcal{B})] $$ where $\\mathcal{B}$ is the baseline. --- [1] Kool et al. "Deep policy dynamic programming for vehicle routing problems." CPAIOR (2022). --- Rebuttal Comment 1.1: Comment: I appreciate the authors providing detailed responses. I have two questions regarding Table 1 in the newly attached PDF. (1) It appears that POMO is POMO Sampling. Is this correct? I suggest using the term "POMO Sampling" or "POMO (sampling)" to reduce confusion with POMO Greedy; if the POMO in Table 1 refers to POMO Greedy, then the execution time is excessively long. (2) The authors included SGBS as a baseline, not SGBS+EAS from the SGBS paper. Since COMPASS iteratively finds solutions during inference time, I believe it would be more appropriate to include SGBS+EAS as a baseline rather than just SGBS. Is there a specific reason for choosing to include SGBS instead of SGBS+EAS as the baseline? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and hope our answers provide further clarity. > (1) Clarification concerning POMO Sampling. We use POMO Sampling, and we will change it to “POMO (sampling)” for clarity. > (2) The authors included SGBS as a baseline, not SGBS+EAS from the SGBS paper. Since COMPASS iteratively finds solutions during inference time, I believe it would be more appropriate to include SGBS+EAS as a baseline rather than just SGBS.
Is there a specific reason for choosing to include SGBS instead of SGBS+EAS as the baseline? EAS is an orthogonal approach to both COMPASS and SGBS, as during inference, both methods can be combined with EAS to finetune on instances. Since both COMPASS and SGBS follow from the POMO model architecture and at inference, employ a novel search method to find better quality solutions, we believe it is fair to compare the two approaches. However, we are happy to provide additional comparison to SGBS + EAS; detailed for TSP and CVRP in the below table (for instance sizes reported in [1]). Overall, the addition of SGBS + EAS improves the optimality gap with respect to EAS alone, and leaves the overall comparison to COMPASS unchanged. Concretely, COMPASS outperforms EAS and SGBS + EAS on TSP and in-distribution CVRP; whilst taking significantly less time. As discussed in section 4.1 of the paper (and in answer to W4 - reviewer ubWG), the additional capacity of EAS allows stronger adaptation to larger out-of-distribution CVRP instances. We will include these extended results in our revised manuscript. As we cannot update the pdf file with our additional results, we report below the tour length, gap to optimality, and runtime for SGBS+EAS and COMPASS. ### TSP: | Method | 100 | 150 | 200 | |-------------|------|------|------| | SGBS+EAS | 7.767, 0.035% (3H) | 9.359, 0.136% (1H) | 10.727, 0.378% (3H) | | COMPASS (ours) | 7.765, 0.002% (2H) | 9.350, 0.043% (32M) | 10.723, 0.337% (70M) | ### CVRP: | Method | 100 | 150 | 200 | |-------------|------|------|------| | SGBS+EAS | 15.594, -0.36% (6H) | 19.168, -0.19% (2H) | 21.988, -0.07% (5H) | | COMPASS (ours) | 15.594, -0.36% (4H) | 19.313, 0.49% (1H) | 22.462, 2.10% (100M) | [1] Choo et al., Simulation-guided beam search for neural combinatorial optimization. NeurIPS (2022).
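For context on the inference procedure compared in this exchange, the latent-space search can be sketched as follows. This is a minimal illustration, not the authors' code: a plain Gaussian evolution strategy stands in for CMA-ES, and `reward_fn` is a hypothetical stand-in for rolling out the decoder conditioned on `z` and scoring the resulting solution (e.g. negative tour length).

```python
import numpy as np

def latent_search(reward_fn, dim=16, pop=8, iters=30, sigma=0.3, seed=0):
    """Sketch of COMPASS-style inference-time search over the latent
    policy space: find the latent vector whose conditioned policy
    performs best on the current instance."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    best_z, best_r = mean, reward_fn(mean)
    for _ in range(iters):
        # Sample a population of candidate latent vectors around the mean.
        cands = mean + sigma * rng.standard_normal((pop, dim))
        rewards = np.array([reward_fn(z) for z in cands])
        # Move the search distribution toward the better half of the
        # population (CMA-ES would additionally adapt a full covariance).
        elite = cands[np.argsort(rewards)[-pop // 2:]]
        mean = elite.mean(axis=0)
        if rewards.max() > best_r:
            best_r = rewards.max()
            best_z = cands[rewards.argmax()]
    return best_z, best_r
```

Because only the 16-dimensional latent vector is searched, the network weights stay frozen: adaptation happens per instance without the per-instance retraining overhead of active-search methods such as EAS.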
Summary: Building upon a pre-trained neural constructive model (such as POMO), this paper proposes COMPASS, which introduces the idea of learning a continuous latent search space to fine-tune the pre-trained POMO model parameters. The latent space allows for the sampling of a vector, which the pre-trained POMO model uses as a conditional vector to generate its own parameters. After training such a latent space, continuous optimization algorithms (such as CMA-ES) can be utilized to search this space, in order to yield the most performant POMO model parameters for each test instance during inference. This allows for per-instance search during inference, while avoiding the need to retrain the deep model for each new test instance (as is the case with active search in EAS). Experiments on benchmarks verify that COMPASS outperforms the state-of-the-art baselines. Strengths: - The concept of learning a latent search space for fine-tuning the parameters of an NCO model is novel and could positively impact the NCO community. However, it is important to acknowledge that the idea of learning a latent search space followed by using a continuous optimizer to search within the space is not new, as evidenced by the CVAE-Opt method (ICLR'21). - The authors conducted comprehensive experiments, supported by detailed tables, figures, and useful visualizations. - COMPASS achieves state-of-the-art performance on benchmark TSP-100 and CVRP-100 instances. Weaknesses: - Although the authors provided reasons, I still think it would be useful to benchmark COMPASS against POMO and EAS equipped with data augmentation, to gain a complete understanding of COMPASS's advantages. That is to say, all baselines should be at their best settings. Furthermore, the recent method SGBS (simulation-guided beam search, NeurIPS'22) is also overlooked. - The method by which POMO is conditioned on the vector sampled from the latent space is unclear.
Specifically, given a pre-trained NCO model, how should the user determine which parameters to condition on in practice?
- The literature review lacks comprehensiveness, missing important works like CVAE-Opt, SGBS, and other recent works.
- COMPASS's generalization to larger sizes (CVRP-200) appears less efficient than EAS.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
1. Does the run time include the time taken for CMA-ES? If so, what is the ratio of CMA-ES time to the total time?
2. In your comparison table, were all results obtained on your server? If so, could you provide the CPU/GPU model type of your server for readers to fully understand the efficiency?
3. How long does it take to train COMPASS, and how much computational cost does it incur?
4. It would be interesting to investigate if COMPASS could enhance neural improvement heuristics like LIH, thereby yielding variable neighborhood search.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The revised paper should mention more limitations and future works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments.

> W1: ... it would be useful to benchmark COMPASS against POMO and EAS with data augmentation. Furthermore, SGBS … is overlooked.

We are happy to provide additional benchmarking to allow comparison to published results and validate our implementations. These are provided in the attached pdf (Table 1) and will be added to the Appendices of a revised manuscript. We believe not using augmentations for the results in the main text remains suitable, as discussed below.

**Benchmarking: augmentations and SGBS**

The key messages are unchanged from the previously reported results.
- COMPASS outperforms all baselines with instance augmentation for all TSP instance sizes.
- COMPASS is the leading method for in-distribution CVRP, with EAS providing stronger adaptation on larger instances (we note the adaptation of EAS does not come without tradeoffs, as discussed in our response to W4).
- We also add SGBS [1] and CVAE-Opt [2]. COMPASS significantly outperforms both methods for all instance sizes of TSP and CVRP.

**Augmentation-free results**

We reaffirm the motivation behind running COMPASS and the baselines without instance augmentation.
- Instance augmentation is a domain-specific trick which cannot be used for all problems (e.g. JSSP, Knapsack).
- Baseline methods still benefit from strong exploration, as they use multiple starting points [3] and a substantial search budget.
- Using augmentations complicates fair comparisons with inference-time search methods. Typically, 8 augmentations are used based on [3]. However, this number is arbitrary. More augmentations increase the number of samples from a specific policy, limiting the number of adaptive steps for methods like EAS and COMPASS.

> W2: How is POMO conditioned on the latent space… how to determine which parameters to condition on in practice?
The decoder is conditioned on a vector sampled from a 16-dim latent space, as fully described in Appendix A.5. Specifically, the latent vector is concatenated with the key, query, and value inputs of the multi-head attention. For clarity, we will add a brief description in the Methods section with reference to A.5 for further details. Whilst COMPASS is agnostic to the network architecture, we did not extensively explore alternative conditioning methods and leave this to future work.

> W3: The literature review [is] missing important works like CVAE-Opt, SGBS and other recent works.

Our literature review focused on RL construction methods, but we agree that a broader review can benefit future readers. Therefore, we will include discussions of CVAE-Opt and RL improvement methods. SGBS is already included in the Related Work (L. 90-91). We also provide CVAE-Opt and SGBS as additional baselines in the attached pdf (Table 1), which will also be included in the revised manuscript.

> W4: COMPASS's generalization to larger sizes (CVRP-200) appears less efficient than EAS.

EAS does adapt to the largest CVRP instances more effectively; however, to achieve this, it (1) adapts orders of magnitude more parameters (e.g. for CVRP200, EAS adapts 200*128=25.6k parameters compared to the 16-dim latent vector of COMPASS), and (2) requires computationally expensive test-time training. Increasing the capacity of COMPASS (e.g. a larger latent space) can be explored in future work, and we note that COMPASS and EAS are not mutually exclusive, and so could be combined. Despite these points, COMPASS outperforms EAS on in-distribution CVRP, all considered TSP and JSSP instance sizes, and all 18 generalization tasks (where instances are procedurally transformed to be out-of-distribution).

> Q1: Does the run time include the time taken for CMA-ES? What is the ratio of CMA-ES time to the total time?
Run times include CMA-ES steps, though this adaptation makes a negligible contribution to the overall timings (in contrast to explicit fine-tuning methods, e.g. EAS). Details are provided in Table 2 of the attached pdf and will be included in a revised Appendix.

> Q2: In Table 1, were all results obtained on your server? Could you provide the CPU/GPU model type…?

COMPASS’ results and all baselines (POMO, POPPY and EAS) were computed by us using a v3-8 TPU. We used previously released checkpoints for these models to ensure consistency. We will update the main paper to include these details.

> Q3: How long does it take to train COMPASS and how much computational cost?

The final COMPASS models are trained until convergence; for each problem, the training time and environment steps are: 4.5 days (110M steps) for TSP, 5.5 days (76.5M steps) for CVRP and 4.5 days (4.2M steps) for JSSP. These details will be added to the revised manuscript.

> Q4: It would be interesting to investigate if COMPASS could enhance neural improvement heuristics like LIH, thereby yielding variable neighborhood search.

We agree this is a promising future direction. In LIH, finding a diverse set of candidate solutions is also desired (see Section V.C of [4]). As COMPASS is applicable to any pretrained model, instead of using the same policy stochastically several times (multi-run), or a selection of likely similar policies found during training (multi-policy), LIH could use COMPASS to promote diversity and subsequently improve performance.

> The revised paper should mention more limitations and future works.

Appendix A.14 discusses limitations and future work; however, we accept this should be presented more prominently. In the revised text we will update the conclusion accordingly.

---
[1] Choo et al., Simulation-guided beam search for neural combinatorial optimization. NeurIPS (2022).
[2] Hottung et al., Learning a latent search space for routing problems using variational autoencoders. ICLR (2021).
[3] Kwon et al., Pomo: Policy optimization with multiple optima for reinforcement learning. NeurIPS (2020).
[4] Wu et al., Learning Improvement Heuristics for Solving Routing Problems. IEEE (2022).

---
Rebuttal Comment 1.1: Title: Thanks for the response
Comment: I appreciate the authors for the detailed reply. I have two remaining points:
1. As pointed out by Reviewer GNoz as well, it would be beneficial if the authors could showcase the performance of COMPASS against the SGBS+EAS+augmentation (which is the current state-of-the-art).
2. I remain of the opinion that augmentation is a useful and effective technique to enhance constructive solvers (to escape local minima).

Regarding the new results:
* Why does 'COMPASS (aug)' underperform compared to 'COMPASS (ours)' on CVRP?
* Why are computation times identical for both augmented and non-augmented versions?

---
Reply to Comment 1.1.1: Comment: We thank the reviewer for their feedback and comments, and hope our answer provides further clarifications.

> 1. As pointed out by Reviewer GNoz as well, it would be beneficial if the authors could showcase the performance of COMPASS against the SGBS+EAS+augmentation (which is the current state-of-the-art).

We are happy to provide an additional comparison to SGBS + EAS, detailed for TSP and CVRP in the table below, which includes the tour length, gap to optimality, and runtime (for the instance sizes reported in [1]). In general, the results leave the overall comparison to COMPASS unchanged. COMPASS outperforms SGBS + EAS on TSP and in-distribution CVRP, whilst taking significantly less time. The additional capacity of EAS allows SGBS + EAS a stronger adaptation to larger out-of-distribution CVRP instances. These results will be included in our revised manuscript. Lastly, we would like to reaffirm that EAS is an orthogonal approach to COMPASS and SGBS; in particular, EAS could be added to COMPASS just like it is added to SGBS.
### TSP:

| Method | 100 | 150 | 200 |
|-------------|------|------|------|
| SGBS+EAS | 7.767, 0.035% (3H) | 9.359, 0.136% (1H) | 10.727, 0.378% (3H) |
| COMPASS (ours) | 7.765, 0.002% (2H) | 9.350, 0.043% (32M) | 10.723, 0.337% (70M) |

### CVRP:

| Method | 100 | 150 | 200 |
|-------------|------|------|------|
| SGBS+EAS | 15.594, -0.36% (6H) | 19.168, -0.19% (2H) | 21.988, -0.07% (5H) |
| COMPASS (ours) | 15.594, -0.36% (4H) | 19.313, 0.49% (1H) | 22.462, 2.10% (100M) |

> 2.1 Why does 'COMPASS (aug)' underperform compared to 'COMPASS (ours)' on CVRP?

To facilitate a fair comparison between COMPASS with and without augmentations, we maintain a constant budget for the overall number of samples (see our answer to the question below); therefore, with 8x augmentations the search procedure of COMPASS has 8 times fewer CMA-ES optimization steps. The improvement of ‘COMPASS (ours)’ over ‘COMPASS (aug)’ highlights that our method performs better when dedicating additional steps to adapting the policy, rather than exploration via augmentations.

> 2.2 Why are the computation times identical for both augmented and non-augmented versions?

The computation budget of augmented and non-augmented COMPASS is fixed (i.e. both methods are allowed to generate the same number of candidate solutions). This enables a direct comparison between these approaches that is consistent with existing literature in terms of budget use.

[1] Choo et al., Simulation-guided beam search for neural combinatorial optimization. NeurIPS (2022).
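As a minimal illustration of the conditioning mechanism described in our answer to W2 above (the latent vector is concatenated with the key, query, and value inputs of the attention), the sketch below uses a single-query, single-head toy attention; all dimensions and function names are illustrative, not the actual implementation from Appendix A.5:

```python
import math

def attend(query, keys, values):
    """Scaled dot-product attention for a single query (pure-Python toy)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def conditioned_attention(z, query, keys, values):
    """Concatenate the latent vector z onto the query/key/value inputs, then attend."""
    return attend(query + z, [k + z for k in keys], [v + z for v in values])

z = [0.1, -0.2, 0.3, 0.0]           # toy 4-dim latent (COMPASS uses 16 dims)
query = [0.5, -0.2]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0]]
out = conditioned_attention(z, query, keys, values)
print(len(out))  # 6: value dim (2) + latent dim (4)
```

The same latent vector is appended to every input, so the attention output is a convex combination of latent-augmented values.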
Summary: This paper proposes COMPASS, an RL-based training framework to learn a diversified neural solver for combinatorial optimization problems. This framework trains a neural network conditioned on a prior vector sampled from a fixed distribution. During the training phase, multiple priors are sampled, and only the parameters corresponding to the best prior are updated. During the inference phase, an evolutionary algorithm is employed to find the best prior for the current instance. The authors test their method on TSP, CVRP, and JSSP. The experiment design is solid, and the results are promising.

Strengths:
1. The resources, e.g., memory and time, used at inference time are lower than those of the current SOTA, thanks to its low-dimensional prior.
2. The learned conditional neural solver has the potential to generalize to out-of-training-distribution instances. This is also partly verified by the Section 4.2 experiments.

Weaknesses:
1. One key challenge in the RL-for-CO area is how to train/generalize a model to truly large cases, e.g., TSP1000/10000. This framework is designed to improve upon another neural solver. However, it cannot solve the truly large cases.
2. At inference time, the priors are sampled and selected using an evolutionary algorithm. This is in essence a search-based method. It is not clearly verified whether "the improvement comes from a good conditional neural solver or the strong evolutionary algorithm".

Technical Quality: 4 excellent
Clarity: 4 excellent

Questions for Authors:
1. As mentioned in weakness (2), I am wondering what the results are of comparing COMPASS with the COMPASS base neural solver + beam search/heuristic search. The traditional search can use fewer resources.
2. For the deep-learning-based baselines, comparing time is insignificant. But I am wondering about the time cost of COMPASS. This can give me more confidence to evaluate how efficient COMPASS is, and to evaluate whether it can be used for real-world problems.
3.
To my knowledge, TSP 100 and TSP 200 do not make much difference for a neural solver. I am wondering what the results are of testing on TSP 1000.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and hope that our answers and additional experiments will clarify any concerns.

> W1: Key challenge in RL for CO is to train/generalize a model to big cases, e.g. TSP1000/10000. This framework is designed to improve upon another neural solver [but] cannot solve big cases.

We agree with the reviewer that scaling models to larger instances is an important challenge. However, whilst this is not a key focus of our work, we nonetheless believe that our method can contribute towards this goal. Concretely, two crucial aspects for tackling larger instance sizes are generalization and a scalable architecture. Our method is SOTA for generalization (see Fig. 3), with scalable latent-space adaptation that is independent of instance size. Furthermore, our method is architecture-agnostic. Although we do not innovate on architecture, we can benefit from any new scalable architecture; e.g. nothing prevents us from using COMPASS on DIMES [1] and hence solving much larger instances, although this is beyond the scope of this work.

> W2: It is not clearly verified whether the improvement [at inference time] comes from a good conditional neural solver or the strong evolution algorithm.

Our experimental results provide evidence that both aspects – (1) a well-trained conditional neural solver and (2) an efficient search algorithm – are critical for strong performance. This can be seen in both the reported results in the paper and the attached pdf with additional experiments. We agree this is a crucial point and will therefore update the manuscript accordingly.

**Previous results**

Point (1) is illustrated by Fig. 4 (main), which shows high-performing regions for a given instance. Point (2) is demonstrated by Fig. 9 (appendix), which shows the principled search method significantly outperforms random search.
Additionally, we see that the random search outperforms POMO and Poppy, confirming that the latent space “contains” high-performing and diverse policies.

**New Exp. 1**

This is further illustrated in Fig. 1 (attached pdf), which compares two COMPASS models (fully- vs. under-trained) solving TSP150 instances with two search methods (CMA-ES and uniform sampling). The results demonstrate that:
- Both search methods for the fully trained model outperform those for the under-trained model, showing the importance of our training procedure.
- Uniform search on the fully-trained solver outperforms CMA-ES search on the under-trained model, showing that the search alone is not sufficient.

**New Exp. 2**

Fig. 2 (attached pdf) presents the evolution of the latent space during training on a TSP150 instance. It can be seen that initially the space is uniform (no specialized regions exist). However, as training progresses, high-performing regions emerge (shown in red), which indicates specialization of policies within the latent space, and we also see the improved performance of the best conditioned policy.

> Q1: What are the results of comparing COMPASS with COMPASS base neural solver + beam search/heuristic search?

This can be approximated by comparing with SGBS, a heuristic approach to improve the performance of POMO [2]. A naive application of this heuristic on COMPASS (sampling a random POMO policy with no latent space search) is equivalent to POMO+SGBS. We report the results of POMO+SGBS in Table 1 (attached pdf) and show that COMPASS outperforms POMO+SGBS on the whole benchmark. This validates that it is worth searching for a good latent condition with the budget rather than fixing a random policy and using a beam search. Nevertheless, there may be a trade-off between search in latent space and heuristic solution search, which we will mention in the updated manuscript.

> Q2: What is the time cost of COMPASS?
Time performance is reported in Table 2 (appendix), but we will update Table 1 (main) with the times. These results show that (i) COMPASS is as fast as POMO and POPPY (the time cost of the CMA-ES search is insignificant), and (ii) COMPASS is significantly faster than EAS, e.g. 4x faster on CVRP 200. The adaptation mechanism of COMPASS comes with negligible time cost. We ran additional experiments to time the solution construction process of COMPASS. Those are reported in Table 2 of the attached rebuttal file and will be added to the Appendix. In a complete rollout of TSP100, the CMA-ES sampling and update takes 0.28 milliseconds, which is three orders of magnitude smaller than the time of the 99 decoding steps (298 ms).

> Q3: To my knowledge, TSP100 and TSP200 do not make too much difference for a neural solver. I am wondering what the results of testing on TSP 1000 are.

We believe that the difference between TSP100 and TSP200 is important, especially when considering generalization to larger instances, as all checkpoints are trained on TSP100. Additionally, TSP 125, 150 and 200 are commonly used benchmark sets from the literature [3-5] and therefore valuable to report for consistency and to enable direct comparison. We agree scaling to larger instances is a pressing challenge, which we discuss in the context of COMPASS in our response to W1. As an initial examination on TSP1000, we ran COMPASS, Poppy and POMO on the instances from [1], taking 7H each. COMPASS outperforms them by a significant margin (29.28 vs. 39.03 & 50.02). EAS-Emb is intractable on TSP1000, but EAS-Tab is reported in [1] and is largely outperformed as well (49.56 in 63.45H).

---
[1] Qiu et al., DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems. NeurIPS (2022).
[2] Choo et al., Simulation-guided beam search for neural combinatorial optimization. NeurIPS (2022).
[3] Grinsztajn et al., Population-based reinforcement learning for combinatorial optimization. (2022).
[4] Hottung et al., Efficient active search for combinatorial optimization problems. ICLR (2022).
[5] Kwon et al., Pomo: Policy optimization with multiple optima for reinforcement learning. NeurIPS (2020).

---
Rebuttal Comment 1.1: Comment: Thanks for the explanations and new experiments. All my questions are somehow answered.
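The inference-time search discussed in this thread — an evolutionary algorithm proposing latent vectors that are then scored by rolling out the conditioned policy — can be sketched with a simplified Gaussian evolution strategy standing in for full CMA-ES (which additionally adapts a covariance matrix and step size online). The toy objective and all names are illustrative; in COMPASS the score would be, e.g., the negative tour length of the rolled-out solution:

```python
import random

random.seed(1)

DIM = 16              # COMPASS searches a 16-dim latent space
TARGET = [0.5] * DIM  # hidden optimum of the toy objective

def score(z):
    """Toy black-box objective (higher is better); in COMPASS this would be
    the quality of the solution rolled out by the z-conditioned policy."""
    return -sum((a - b) ** 2 for a, b in zip(z, TARGET))

mean, sigma, lam = [0.0] * DIM, 0.5, 16
best_z, best_s = list(mean), score(mean)
for _ in range(50):
    # Sample a population of latent vectors around the current search mean.
    pop = [[m + random.gauss(0.0, sigma) for m in mean] for _ in range(lam)]
    pop.sort(key=score, reverse=True)
    elite = pop[: lam // 4]
    # Move the search distribution toward the elite samples.
    mean = [sum(z[i] for z in elite) / len(elite) for i in range(DIM)]
    sigma *= 0.95  # crude step-size decay (CMA-ES adapts this online)
    if score(pop[0]) > best_s:
        best_z, best_s = pop[0], score(pop[0])
```

The ask/evaluate/update loop is cheap relative to policy rollouts, which is consistent with the negligible CMA-ES overhead reported above.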
Summary: The paper proposes a neural combinatorial optimization approach that allows for an extensive search for high-quality solutions. The approach uses reinforcement learning to train a network to construct solutions for the traveling salesman problem (TSP), capacitated vehicle routing problem (CVRP), and job shop scheduling problem (JSSP). During training, the network is trained to parameterize a diverse set of policies that are conditioned on a continuous latent space. At test time, a guided search is performed using Covariance Matrix Adaptation (CMA-ES) to find regions in the continuous latent space that are associated with policies leading to high-quality solutions. The experiments indicate that the proposed method provides competitive performance and is able to generalize well to instances that are different from those seen during training.

Strengths:
- The results show that the method offers very good performance on all considered problems.
- The paper addresses an important problem (designing a neural combinatorial optimization approach that is able to perform an extensive search).
- The authors perform some interesting generalization experiments that go beyond changing only the instance size and also consider other shifts in the distribution.
- The authors will make the code of their method publicly available.
- Overall, the paper is well written and clearly organized.

Weaknesses:
- The proposed method is very similar to the method from [Hottung et al.], which also trains a neural network conditioned on a continuous latent space to construct diverse solutions to routing problems. At test time, both methods search the continuous latent space using continuous optimization methods (in this work, CMA-ES; in Hottung et al., differential evolution). The authors should make this clear in the paper and discuss the differences between the methods.
- In general, the paper omits discussing most of the existing works on learning extensive search methods (or improvement methods) for combinatorial optimization problems. Instead, the authors conclude that “[..] the field has reached a point where methods [...] can hardly make significant improvements [over quickly generated solutions when] given a budget for additional computation.” (page 2) In fact, many approaches have been proposed that aim to exploit bigger computation budgets and that perform an extensive, guided search:
  - Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. Advances in Neural Information Processing Systems 32, 2019.
  - André Hottung and Kevin Tierney. Neural large neighborhood search for the capacitated vehicle routing problem. European Conference on Artificial Intelligence, pages 443–450, 2020.
  - Minsu Kim and Jinkyoo Park. Learning collaborative policies to solve NP-hard routing problems. Advances in Neural Information Processing Systems 34 (2021): 10418-10430.
  - Yining Ma et al. Learning to iteratively solve routing problems with dual-aspect collaborative transformer. Advances in Neural Information Processing Systems 34 (2021): 11096-11107.
- The reported performance of the baseline approaches is significantly worse than in their original works.
- The validity of the experimental results is limited by the fact that the authors remove a core component from the considered baseline approaches. More precisely, the authors do not use instance augmentation (which considers 8 different augmentations of each test instance during the search). Using augmentations of an instance is an established way of increasing exploration during the search, because neural-network-based construction methods tend to generate different solutions for each augmented version of an instance.
While it can be argued that an augmentation mechanism is a “domain-specific trick”, it can also be considered unfair to remove a component from a baseline approach that encourages exploration without replacing it with a different component that increases exploration. Most methods will perform worse if a component that the developers considered as given when designing the method (even if it can easily be replaced by something else) is removed. Hence, I suggest that the authors also report results for the baselines with augmentation.
- The authors reimplement the baseline approaches even though their code is publicly available. This has some pros and cons. On the plus side, implementing all approaches within the same code base allows a fair comparison of the runtime, unaffected by implementation tricks or differences in the speed of the used framework. However, (to my surprise) the authors do not report any runtimes in the main paper. On the negative side, there is a risk that the implementation does not work as well as the original approach due to small mistakes/misunderstandings. Currently, it is not possible to evaluate if the implementation of the authors matches the performance of the original code base, because the authors do not report results with the augmentation mechanism being enabled. Hence, I again suggest that the authors also report results for the baselines with augmentation to demonstrate that their reimplementation is correct (in that case, having a unified code base for all approaches could even be considered a strength of the paper).
- The authors should make it clearer if they used identical test instances to earlier work or if they generated new test instances (I hope the former, because generating new test instances makes a comparison to earlier works unnecessarily difficult).

[Hottung et al.] Hottung, André, Bhanu Bhandari, and Kevin Tierney. "Learning a latent search space for routing problems using variational autoencoders."
International Conference on Learning Representations, 2021.

Technical Quality: 3 good
Clarity: 3 good
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations:
- The paper does not discuss limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and positive feedback. We will update the paper accordingly and have added additional experiments to help address their concerns.

> W1: The proposed method is very similar to CVAE-Opt [1]. The authors should make it clear in the paper and discuss the differences.

We agree that the work of [1], CVAE-Opt, merits discussion; however, there are significant differences both algorithmically and in terms of empirical performance between this work and COMPASS. Specifically, whilst [1] also trains a neural network conditioned on a continuous latent space and explores it at inference time:
- Our method is entirely trained with reinforcement learning and hence, unlike [1], does not require any direct supervision from pre-solved instances, and is thus amenable to problems where no good prior solver is available.
- CVAE-Opt has to additionally train a recurrent encoder of (instance, solution) pairs. By contrast, COMPASS considers the latent space to encode a distribution of complementary policies and can be easily applied to pre-trained models (e.g. POMO).
- Our method significantly outperforms the work of [1] whilst also having a significantly shorter runtime, as reported in Table 1 of the pdf file attached (will be added to Appendix A.1.1).

Whilst our initial intention was to focus our Related Work section on RL construction methods, we will update it to cover a larger set of methods and ensure that [1] is discussed in any revised manuscript.

> W2: The paper omits discussing most of the existing works on learning extensive search methods (or improvement methods). [And statement L. 41/42 is misleading].

The sentence (L. 41/42) was referring to RL construction methods; however, we accept that this was not sufficiently clear and will update the phrasing to make this explicit.
Additionally, we will update our Related Work section to inform readers that there also exists a body of literature on improvement methods, which propose approaches to exploit increased computation time. Finally, we will present the most relevant papers on improvement methods in a new section of the Appendix. Since the literature on construction methods is both extensive and more directly related to our work, we believe it is important to ensure this remains the key focus of the main manuscript (similarly to [3]), with extended context provided in the Appendices.

> W3: The validity of the experimental results is limited by the fact that the authors (...) do not use instance augmentation. I (...) suggest also reporting results for the baselines with augmentation to demonstrate that their re-implementation is correct.

We agree with the reviewer that reporting the results with problem augmentation is an important step to validate our implementations, and hence to strengthen our experimental results. Consequently, we have re-run the entire benchmark with instance augmentation. The results will be added to the Appendix and are reported in the file attached to our rebuttal. Concerning the runtimes, they were already reported in Table 2 of Appendix A.1.1, but we will add them to Table 1 (main paper). With regard to the correctness of our implementations, the reported POMO results are run in-house and match, or outperform, the published results in [2] where available. We also independently verified that our re-implementation of EAS on TSP100 and CVRP100 with augmentations is consistent with the published results.

We think that it is fair to run the benchmark without instance augmentation. As mentioned by the reviewer, those are domain-specific tricks that cannot always be used (e.g. JSSP). It is hence crucial to develop approaches that do not rely on those for exploration.
In addition, it is important to mention that the baselines still benefit from strong exploration in our benchmark, as they all use the multiple starting positions introduced in [2]. Finally, we note the use of augmentations makes fair comparison to methods designed for inference-time search challenging. In the literature, 8 augmentations are typically used for each problem instance, following the protocol of [2]; however, this choice is arbitrary and in practice any number could be used. Larger numbers allow more deterministic samples from a specific policy, but therefore also leave less of the inference budget for subsequent steps with updated policies when using adaptive methods such as EAS and COMPASS. Indeed, in these cases the optimal number of augmentations would have to be tuned as an additional hyper-parameter of the search.

That being said, we agree that, for continuity with previous work, it is valuable to run and report those results, which we did for the rebuttal. Whilst the overall ranking of different methods is nearly always unchanged, we can still extract interesting observations from those results. In particular, (i) COMPASS remains state-of-the-art on all instance sizes for TSP, and (ii) on CVRP, COMPASS makes better use of its budget by exploring its latent space rather than relying on instance augmentation. In particular, COMPASS with no augmentation outperforms all other methods with augmentation on CVRP 100.

> W4: The authors should make it clearer if they used identical test instances to earlier work.

We reuse the identical test instances used by the methods we compare to. We will update the paper to make this clearer.

> The paper does not discuss limitations.

We discuss limitations of our method in the Appendix (A.14). We nevertheless agree that limitations should be mentioned in the main paper: we will add a paragraph in the conclusion.

---
[1] Hottung et al., Learning a latent search space for routing problems using variational autoencoders.
ICLR (2021).
[2] Kwon et al., Pomo: Policy optimization with multiple optima for reinforcement learning. NeurIPS (2020).
[3] Kim et al., Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization. NeurIPS (2022).

---
Rebuttal Comment 1.1: Title: Reviewer response
Comment: Thank you for your response!

> The sentence (L. 41/42) was referring to RL construction methods, however we accept that this was not sufficiently clear and will update the phrasing to make this explicit.

Even for construction methods, the statement is not completely true. For example, the recently proposed Poppy method can benefit from longer runtimes (e.g., 4 hours for 10,000 instances). Overall, I feel like you want to highlight a gap in the literature that is not really there.

> Consequently, we have re-run the entire benchmark with instance augmentation.

Thank you for conducting the additional experiments. For EAS, you seem to only report the results from the EAS paper instead of those from your own JAX implementation. What is the reason for that? Overall, I am surprised by the long runtimes of EAS reported in the main paper. For CVRP100, EAS takes more than twice as long as POMO. In the EAS paper, EAS takes only 30% more time than POMO when sampling an identical number of instances.

> The results will be added to the Appendix and are reported in the file attached to our rebuttal.

Given that you get one additional page for the final version of the paper, I think it would be better to include results with and without augmentation in the main paper in one single table. This makes it easier for the reader to get a complete picture and to understand the impact of augmentation on the different methods. It is also fairer because results would then be reported for each method at their best setting (as pointed out by reviewer ubWG).
> In the literature, 8 augmentations are typically used for each problem instance following the protocol of [2]; however, this choice is arbitrary and in practice any number could be used. The literature usually uses 8 augmentations because this is the number of unique unit-square transformations (see Table 1 in the POMO paper). While different augmentation techniques are possible, the established 8× augmentation method is a very natural approach for 2D Euclidean routing problems. --- Reply to Comment 1.1.1: Title: Additional comments [1/2] Comment: We thank the reviewer for their additional comments. > **Even for construction methods the statement is not completely true. For example, the recently proposed Poppy method can benefit from longer runtimes (e.g., 4 hours for 10,000 instances).** We agree that L. 41/42 lacks nuance and propose a rephrasing. Our aim was to underscore the challenges faced by existing methods as detailed in L. 28-40. In essence: 1. RL construction methods (e.g. POMO and Poppy) strive to enhance solution quality from a few rollout episodes, primarily using simple bulk stochastic sampling as a “search” heuristic at test time. 2. Search-based methods (e.g. EAS) have practical challenges like inference-time training costs. 3. Existing approaches typically separate training the one-shot inference policy from formulating an effective search procedure. Notably, addressing this third point drives the motivation behind COMPASS. Given that the preceding paragraph already communicates these challenges, we find L. 41/42 redundant and subjective. We suggest its removal and merging L. 43-45 with the end of the previous section to read: "...rather current approaches typically completely decouple both. The absence of an efficient search strategy is even more detrimental when…". On the Reviewer's point about Poppy benefiting from extended runtimes, we agree it outperforms methods like POMO.
This edge comes from its pre-training of a fixed but diverse set of policies. However, it still predominantly depends on direct stochastic sampling to improve on its few-shot performance. Notably, in our experiments, we've used Poppy as a benchmark and observed that COMPASS's inference-time adaptation offers marked enhancements in both performance and generalization. > **For EAS you seem to only report the results from the EAS paper instead of those from your own JAX implementation. What is the reason for that? Overall, I am surprised by the long runtimes of EAS reported in the main paper. For CVRP100, EAS takes more than twice as long as POMO. In the EAS paper, EAS takes only 30% more time than POMO when sampling an identical number of instances.** Due to the time constraints of the rebuttal, we relied on published results. However, to address concerns about the accuracy of our re-implementation, we replicated results on TSP100 and CVRP100. Our findings align with those from [1]:

| Method | TSP100 | CVRP100 |
|-------------|------|------|
| EAS (paper) | 7.769 | 15.63 |
| EAS (ours) | 7.768 | 15.62 |

The timings in the rebuttal were also taken from [1]. Notably, our EAS execution (using both our codebase and hardware) is about 20% faster than [1]. For instance, on TSP100, our POMO and EAS runtimes are 2 and 4 hours respectively, while [1] reports 5 hours for EAS. Although this difference doesn't change the overall conclusion that COMPASS is significantly more efficient, we plan to update all benchmarks using our EAS version for better comparison. While our EAS runtimes differ more significantly from POMO than in [1], this doesn't indicate inefficiency. EAS has two main steps at test-time: (1) rolling out trajectories to sample solutions, and (2) backpropagation for network weight updates. POMO only involves step (1). We suspect the variation in relative speeds between these steps comes from our use of JAX, instead of PyTorch as in [1].
Given that both our network and environment are written in pure JAX, we can compile the entire rollout into a single optimized operation, making sampling trajectories (step 1) much faster. Overall, we emphasize that best efforts were made to optimize the implementations of all considered methods (e.g. POMO, Poppy, EAS, COMPASS). Ultimately, our reported runtimes will come from a single codebase that will be open-sourced along with the paper.
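To make the fused-rollout point concrete, here is a minimal, hypothetical JAX sketch (the policy and environment below are toys invented for illustration, not the actual COMPASS codebase): `jax.jit` combined with `jax.lax.scan` compiles the whole trajectory sampling loop (step 1) into a single XLA computation, rather than paying per-step Python dispatch overhead.

```python
import jax
import jax.numpy as jnp

def rollout(policy_params, key, num_steps=16):
    """Sample one trajectory; policy step and env transition are fused in a scan."""
    def step(carry, _):
        state, key = carry
        key, subkey = jax.random.split(key)
        logits = policy_params @ state              # toy linear policy over actions
        action = jax.random.categorical(subkey, logits)
        # toy deterministic environment: shift the state and write the action in
        next_state = jnp.roll(state, 1).at[0].set(action.astype(state.dtype))
        reward = -jnp.abs(next_state).sum()         # toy (non-positive) reward
        return (next_state, key), reward

    # lax.scan unrolls the whole episode inside one traced computation
    (_, _), rewards = jax.lax.scan(step, (jnp.ones(4), key), None, length=num_steps)
    return rewards.sum()

# jit compiles sampling (step 1) end to end; adaptive methods like EAS would
# additionally backpropagate through sampled trajectories (step 2).
# num_steps must stay a concrete Python int (it sets the scan length).
fast_rollout = jax.jit(rollout)
```

The same pattern extends to batched instances via `jax.vmap`, which is what makes a pure-JAX environment attractive for this kind of benchmark.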
Rebuttal 1: Rebuttal: We thank the reviewers for their positive comments, feedback and suggestions. In particular, we are pleased to see the contributions of our work, both methodological and empirical, highlighted. We respond to each question and concern in detail for each reviewer independently. However, in general, the recurring themes were requests for (1) extended results and discussions of additional solving methods, either using instance augmentation common for TSP and CVRP (Qr8r, ubWG) or additional models such as CVAE-Opt, SGBS, DPDP (YBD4, ubWG), and (2) clarification of technical details and timings (YBD4, Qr8r, ubWG, GNoz). We have provided additional results relating to point (1) in the attached pdf, namely Table 1. We find that the overall messages of the paper, such as COMPASS remaining the most performant RL method, are unchanged. In particular, COMPASS significantly outperforms the additional baselines requested. For point (2) we have provided all additional information requested, with supporting figures in the attached pdf, clarifying our manuscript where details were missing, and including additional references to the relevant sections of the Appendices. We thank the reviewers for raising these points as the additional results further strengthen our contributions. We believe that all raised points have been addressed and would be happy to discuss any remaining concerns that the reviewers may have. Pdf: /pdf/401bc550fa123a0303647691d7dca446c60a8ccb.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Conditional score-based diffusion models for Bayesian inference in infinite dimensions
Accept (spotlight)
Summary: - The study proposes a method to learn the posterior distribution in infinite-dimensional Bayesian linear inverse problems using amortized conditional Score-based Diffusion Models (SDMs). This extends conditional SDMs into the infinite-dimensional function space setting, as existing conditional SDMs have previously only dealt with finite-dimensional vector spaces (noting also that _unconditional_ SDMs have recently been extended to infinite-dimensional vector spaces by Pidstrigach et al.). This opens the way for applications in, for example, PDEs, where the unknown parameters to be estimated take the form of functions. - The key technique underlying their approach is to define the _conditional_ score in an infinite-dimensional setting, extending the method of Pidstrigach et al., who defined the _unconditional_ score in the infinite-dimensional setting. - Their definition of the conditional score in infinite dimensions allows them to avoid having to solve a potentially expensive proximal optimization step, as was done by Pidstrigach et al. - The authors then provide a comprehensive theoretical analysis of the use of their conditional score in SDMs and show: - How this newly defined score is used as a reverse drift of the diffusion process, which leads to a generative model that samples from the correct target conditional distribution under certain conditions. - That as long as you start from the invariant distribution of the diffusion process, the reverse SDE converges to the target distribution exponentially fast. - By explicitly computing the expected square norm of the conditional score, they show that a uniform-in-time estimate is not always true for the conditional score. This leads them to provide a set of conditions to be satisfied to ensure a uniform-in-time estimate for a general class of prior measures. - That the conditional score can be estimated via a conditional denoising score matching objective in infinite dimensions.
- That the conditional denoising estimator is a consistent estimator of the conditional score in infinite dimensions. - That unlike the unconditional score, for noiseless observations the conditional score blows up as t->0. - They present a small toy experiment that validates their approach by demonstrating the applicability of their method in approximating non-Gaussian multi-modal distributions. Strengths: I will address strengths and weaknesses across the four dimensions (Originality, Quality, Clarity, Significance) below. Weaknesses: **Originality:** - Are the tasks or methods new? - Yes, the authors make a novel contribution to the quickly-growing diffusion model literature by showing how to extend conditional SDMs into the infinite-dimensional vector space setting, thus opening the door to a wider range of applications (PDEs, etc.). - Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? - The authors are very clear about: 1. How they are starting with the recently-proposed framework of Pidstrigach et al. 2. The specific point where they deviate from, and then extend, Pidstrigach's work (Definition 2, eq. 8) - Further, they provide a comprehensive analysis of the use of their Definition 2 in SDMs in sections 4 and 5. - However, I should state that I am not adequately familiar with the mathematical techniques deployed in this paper to check their technical claims for correctness or novelty, so please defer to another reviewer with more expertise in this area. - Is related work adequately cited? - Yes. **Quality:** - Is the submission technically sound? - It appears to be, although as previously stated I am not sufficiently versed in their mathematical techniques to be 100% sure. - Are claims well supported (e.g., by theoretical analysis or experimental results)? - Theoretically yes, experimentally no.
I realize this is a theory paper and don't necessarily expect full-scale experiments on PDEs, but in Section 6 I expected to see at least 2 obvious baselines which were foreshadowed in the body of the text but not experimentally validated on the toy experiments. These are: 1. How does their method compare to the crude, discretization-based approach discussed in lines 56 - 61: > A straightforward solution may be to discretize the infinite-dimensional input and output function spaces into finite-dimensional vectors, and apply SDMs to learn the posterior. Yet theoretical studies of current DMs suggest that performance guarantees do not generalize well on increasing dimensions [7, 9, 33]. This is precisely why Stuart’s guiding principle to study a Bayesian inverse problem for functions— “avoid discretization until the last possible moment” [41] — is more than ever critical to the use of SDMs. A primary motivation for their method is "Stuart's Principle", which says to avoid discretization until the last possible moment. I would have liked to see experimentally why this principle is so important on the simple toy examples they have provided. 2. How does their method compare to the approach proposed by Pidstrigach for conditional sampling, which uses a proximal optimization step? This is a second primary motivation for their approach (and appears in the abstract): their method is more performant because it avoids solving an optimization problem at each timestep. As far as I can tell, it has not been experimentally validated that their approach is faster or more performant than the baseline method of Pidstrigach. Evidence showing their approach either gets better samples, or gets samples of the same quality more efficiently, is needed, given that a primary motivation for their approach is that the baseline method of Pidstrigach may be too computationally costly because of their use of proximal optimization. - Are the methods used appropriate?
- Yes - Is this a complete piece of work or work in progress? - Yes, modulo the missing baselines discussed above. I believe such baselines are needed to consider this a complete piece of work, given how "Stuart's Principle" and "avoid proximal optimization" play a key role in the storyline and motivation for their technique. - Are the authors careful and honest about evaluating both the strengths and weaknesses of their work? - Yes. In particular, I found their observation that, unlike the unconditional score, for noiseless observations the conditional score blows up as t->0, particularly interesting. However, in the conclusion I would have liked to read more about the authors' reflections on the strengths/weaknesses/future directions of their approach, both from a technical standpoint and from the standpoint of potential downstream applications of this work. **Clarity:** - Is the submission clearly written? - I found all writing up until section 4 relatively easy to follow. I began to get lost around sections 4 and 5 and couldn't follow the math, but I attribute this largely to not being comfortable with stochastic differential equations. I could still follow the high-level plot in these sections, but will defer to other reviews to evaluate the technical claims. - Is it well organized? - Yes - Does it adequately inform the reader? - Yes **Significance:** - Are the results important? - Yes, although I would have liked the authors to motivate the applications of their approach more thoroughly. I believe they only listed PDEs as an example for why you would want to use this approach, but surely there are more applications than just PDEs, no? - Are others (researchers or practitioners) likely to use the ideas or build on them? - Yes - Does the submission address a difficult task in a better way than previous work? - Theoretically it appears so, but baselines are needed to experimentally validate these claims.
- Does it advance the state of the art in a demonstrable way? - Theoretically it appears so, but baselines are needed to experimentally validate these claims. - Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach? - Yes, they build on the work of Pidstrigach in a novel way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are some other applications of this work besides PDEs? Can you include a few of them in the introduction so the reader doesn't think applications of your work are limited only to PDEs? - In the conclusion the authors say their method "is able to perform conditional sampling directly on infinite dimensional Hilbert spaces." Don't Gaussian processes also allow you to perform conditional sampling directly on infinite-dimensional Hilbert spaces? What are the differences between your approach and Gaussian processes? When would I want to use one vs. the other? Could a GP be used as a baseline in your Figure 1? If so, it would be very interesting to see how it compares. - The authors also say "We also show that the conditional score can have a singular behavior at small times when the observations are noiseless, in contrast with the unconditional score under similar hypotheses." Is this just a limiting phenomenon of theoretical interest, or do you expect this to cause difficulties in practice? Are there any applications where we should expect observations to be noiseless, or does that assumption never hold in practice? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - There does not appear to be a "Limitations and Broader Impacts" statement in this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
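As background for this review's summary of how the conditional score enters as a reverse drift, the standard finite-dimensional forward-reverse pair can be sketched as follows (a textbook Ornstein-Uhlenbeck sketch, not the paper's infinite-dimensional equations; the paper's point is precisely that $\nabla \log p_t$ has no literal meaning in infinite dimensions and must be redefined):

```latex
\begin{aligned}
\text{forward (noising):} \quad & dX_t = -\tfrac{1}{2} X_t \, dt + dW_t,
  && X_0 \sim p_0(\cdot \mid y), \\
\text{reverse (generative):} \quad & dY_t = \Bigl[ \tfrac{1}{2} Y_t
  + \nabla \log p_{T-t}(Y_t \mid y) \Bigr] dt + dB_t,
  && Y_0 \sim p_T,
\end{aligned}
```

so that $Y_T$ is (approximately) distributed according to the target conditional $p_0(\cdot \mid y)$; the exponentially fast convergence noted in the summary refers to starting the reverse dynamics from the invariant measure of the forward process.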
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for their feedback. We are happy to clarify our manuscript in response to the Reviewer's questions. We hope that this could lead to an improvement in their assessment of the paper. 1. **Discretization-based approaches and Pidstrigach's procedure.** While it would have been interesting to extend the section dedicated to numerical experiments with explicit comparisons with other methods, we want to point out that the limitations of crude discretization-based approaches are well-documented in the literature on Bayesian inverse problems. Consequently, we made a deliberate decision to avoid focusing on them experimentally. Moreover, in the literature on score-based diffusion models (SDMs), there are already instances where results show degradation when generalizing to infinite dimensions [1, Figure 5]. Regarding the comparison between our method and the one by Pidstrigach [2], we want to stress that our method is not necessarily more performant; that was never our claim. It really depends on the task to be solved. Our aim was to frame the comparison between the two methods through the lens of the more general comparison between case-specific inference and amortized inference. Amortized methods *can* be a preferred option in Bayesian inverse problems. In our paper, we cite some works where these examples are provided (line 119). Given that there is abundant literature on both topics (the importance of discretizing as late as possible, and amortized methods), we believe that it is more interesting, since this is a theory paper, to focus on the fact that while Pidstrigach's method for conditional sampling is heuristic, ours *provably* samples from the posterior and is discretization-invariant. Our main contribution is to offer theoretical guarantees for sampling from the posterior, with the additional appeal of providing a method for practitioners seeking discretization-invariant amortized DMs.
In this sense, it is important to note how our method compares to Pidstrigach's. In fact, their implementation is essentially finite-dimensional. Pidstrigach et al. use a UNet to parametrize their score [2, Section 6], which restricts its evaluation to the training interval. If dealing with a specific inverse problem on a new grid, they would need to gather new training data, retrain the NN, and limit the use of the score to that specific grid. In contrast, our approach allows us to move away from the initial grid, effectively taking advantage of the discretization-invariance property. This flexibility ensures a broader applicability of our method. Your comment helped us recognize that we haven't stressed this difference enough, so we will add a remark and a new numerical experiment (see Figure 1 in the PDF included in our global response). Finally, we agree with your remark that our toy example needs some improvement to better fit the storyline. In the PDF we included preliminary results, namely Figure 1, illustrating that the method can handle different discretizations. This serves to underscore the importance of Stuart’s principle. Additionally, we incorporated an experiment on a large-scale inverse problem in geophysics, specifically linearized seismic imaging via the Born approximation with a $256 \times 256$-dimensional unknown parameter, to enhance the comprehensiveness of our results (see Figure 2). We will add these experiments to the paper. 2. **Concluding remarks.** Given the theoretical nature of the paper, summarizing a series of details from the proofs and presenting them in a self-contained section is challenging. Nonetheless, we recognize the value in discussing the strengths, weaknesses, and future directions of our method. Therefore, we intend to address this aspect by adding a few remarks throughout the text incorporating, among others, Reviewer hg8t's second question. 3.
**Applications beyond PDEs.** While applications of our work to PDE-based inverse problems are the most natural ones, they are by no means the only ones. In the non-PDE class we may think of geometric inverse problems (e.g. how to determine the Riemannian metric from geodesic information, or the background velocity map from travel-time information in geophysics) [3] or inverse problems for singular integral operators [4]. We will cite these examples in our paper. 4. **Can a GP be used as a baseline in Figure 1?** In the Gaussian framework of Section 4 this would make sense, but in the general framework of Sections 5-6 this does not seem possible. It is clear that the conditional distribution in the example of Section 6 is strongly bimodal, so GP regression does not seem appropriate to address this example. 5. **Singularity of the conditional score.** The singularity in the conditional score as noise vanishes has been investigated in finite dimensions. It is not merely a theoretical phenomenon but has practical implications. For a discussion of the difficulties this singularity causes and numerical methods for approximating the score at small times in finite dimensions, we refer to [5, 6]. As for our paper, we anticipate that Assumption 1 is reasonably easy to satisfy. However, we must be mindful of the possibility of encountering a blow-up of the score under the conditions described by Reviewer hg8t. To address this concern, we will add a remark. **References** [1] A. Phillips et al., *Spectral diffusion processes*, 2022. [2] J. Pidstrigach et al., *Infinite-dimensional diffusion models for function spaces*, 2023. [3] G. Uhlmann and A. Vasy, *The inverse problem for the local geodesic ray transform*, 2016. [4] A. Dynin, *Inversion problem for singular integral operators:* $C^*$*-approach*, 1978. [5] D. Kim et al., *Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation*, 2022. [6] T.
Dockhorn et al, *Score-Based Generative Modeling with Critically-Damped Langevin Diffusion*, 2022. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your response, which I found convincing. Given the new (and impressive!) experiments, I've happily increased my score from 6->7.
Summary: This paper mathematically examines linear inverse problems in infinite-dimensional vector spaces. Particularly, it is proved that the conditional denoising estimator is a consistent estimator of the conditional score in infinite dimensions. Strengths: The consistency of the conditional denoising estimator in infinite-dimensional vector spaces is mathematically shown. For the specific case of a Gaussian prior, the forward-reverse SDEs are solved exactly, which shows an exponentially fast convergence in the reverse SDE. A sufficient condition for the success of the score-based diffusion model framework is presented for the infinite-dimensional version. Weaknesses: The considered inverse problems are not of the plug-and-play type, which limits the practical utility. The numerical example is limited to a one-dimensional toy model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Gaussian process (GP) regression is a popular method for inverse problems in Hilbert space. I wonder if the current problem would have a certain connection to GPs, in particular in the case of the Gaussian prior discussed in section 4. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussion of the motivation to examine inverse problems in Hilbert space is lacking. In what practical situations do inverse problems in Hilbert space arise? For instance, GP is widely used for Bayesian optimization, which is an optimization scheme for black-box functions. Additional writing about possible applications of inverse problems in Hilbert space would make the paper more attractive.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for their positive feedback. We are happy to clarify our manuscript in response to the Reviewer's remarks and questions. 1. **Practical utility against plug-and-play-type approaches.** While plug-and-play methods are indeed very popular, we would like to emphasize that we deliberately chose not to focus on them, first and foremost because we wanted to provide theoretically-grounded guarantees for an approach that is not heuristic. Furthermore, it is worth noting that the current implementation of plug-and-play approaches using diffusion models for conditional sampling in the infinite-dimensional setting [1, Section 6] does not fully exploit the "discretization-invariance" property achieved by studying the problem in infinite dimensions. Pidstrigach et al. employ a UNet to parametrize their score, which restricts the evaluation of their score function to the training interval. Consequently, when dealing with a specific inverse problem on a new grid, they would need to gather new training data on that grid, retrain the neural network, and limit the use of the score to that specific grid. In contrast, our implementation allows us to handle new discretizations without requiring additional training data, as demonstrated in the new experiment that we intend to include in the final version of the paper (see Figure 1 in the attached PDF for preliminary results). Therefore, we believe that our approach is not only theoretically grounded but also offers broader applicability. 2. **Numerical example is limited to a one-dimensional toy model.** We acknowledge your remark, along with those of the other reviewers, regarding the need for improvement in our toy example.
In our global response, we have included a PDF that contains additional experiments, illustrating the applicability of our method to a large-scale inverse problem in geophysics, specifically linearized seismic imaging via the Born approximation (Figure 2 in the attached PDF) with a $256 \times 256$-dimensional unknown parameter. We will add these experiments to the paper. 3. **Connection to Gaussian processes.** Gaussian process regression is indeed related to the calculations presented in Section 4, which is in the Gaussian framework. GPs have been used to solve inverse problems for at least two fundamental reasons. First, a GP naturally arises when data is contaminated by noise, and additive Gaussian noise is the simplest noise model. Second, it allows the use of simple but powerful theorems (such as the Gaussian conditioning theorem). Moreover, in the infinite-dimensional setting, the GP approach is possible via the use of Gaussian measures (and appropriate theorems such as the Feldman-Hajek theorem). In our paper we leverage these aspects of Gaussian analysis to gain novel and fundamental insights about the conditional score. Moreover, we emphasize the robustness of these insights by introducing a prior that is absolutely continuous with respect to a Gaussian measure. 4. **Possible applications of inverse problems in Hilbert spaces.** The Bayesian approach to inverse problems makes it possible to deal with under-determined and/or noisy inverse problems via appropriate prior modelling. In the infinite-dimensional PDE context, this prior should sample functions in suitable function spaces, which are very often Hilbert spaces.
For instance, the inverse heat equation (how to determine the initial condition of a heat or convection-diffusion equation from noisy measurements) or the elliptic inverse problem (how to determine the source of an elliptic equation from noisy measurements) presented in [2] are concrete examples of noisy and under-determined inverse problems and they are naturally formulated in Hilbert spaces. We will mention these examples in the paper. **References** [1] J. Pidstrigach, Y. Marzouk, S. Reich, and S. Wang, *Infinite-dimensional diffusion models for function spaces*, arXiv:2302.10130. [2] M. Dashti and A. M. Stuart, *The Bayesian approach to inverse problems*, in Handbook of Uncertainty Quantification, Springer, 2017, pp. 311-428. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I keep my evaluation as it is.
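The Gaussian conditioning theorem invoked in point 3 of the rebuttal above has a simple finite-dimensional analogue worth keeping in mind (a textbook identity, not the paper's infinite-dimensional statement): for a linear model $y = Ax + \varepsilon$ with prior $x \sim \mathcal{N}(m, C)$ and noise $\varepsilon \sim \mathcal{N}(0, \Gamma)$ independent of $x$,

```latex
x \mid y \;\sim\; \mathcal{N}\Bigl( m + C A^{\top} (A C A^{\top} + \Gamma)^{-1} (y - A m),\;
C - C A^{\top} (A C A^{\top} + \Gamma)^{-1} A C \Bigr).
```

In infinite dimensions the same conditioning is carried out with Gaussian measures and operator-valued covariances, which is where tools such as the Feldman-Hajek theorem mentioned above enter.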
Summary: This paper proposes a method to deal with inverse problems in infinite dimensions using conditional score-based models. Specifically, the authors propose to directly learn the posterior distribution in infinite-dimensional Bayesian linear inverse problems using amortized conditional SDMs. Moreover, the paper also discusses the robustness of the learned distribution against perturbations of the observations. A numerical experiment is conducted to validate its efficiency. Strengths: 1. This paper proposes an interesting method to deal with infinite-dimensional Bayesian linear inverse problems. 2. It provides a detailed analysis of the forward-reverse conditional SDE framework in the case of a Gaussian prior measure. 3. It provides a set of conditions to ensure a uniform-in-time estimate for a general class of prior measures. Weaknesses: 1. Regarding the introduced definition of the conditional score and the result that the conditional score can be estimated via conditional denoising score matching, it seems that they are straightforward extensions of the unconditional case. For score-based models, the key is to learn a general distribution using score matching, regardless of whether it is a conditional distribution or an unconditional distribution. I mean, there is actually no fundamental difference between a conditional distribution and an unconditional distribution, i.e., score matching applies to them or other general distributions equally. 2. From my understanding, and also as suggested in the abstract, this paper only focuses on infinite-dimensional Bayesian linear inverse problems, rather than arbitrary inverse problems. The title of this paper is kind of misleading. On the other hand, for linear inverse problems, the score of the likelihood term is relatively easy to obtain, compared to directly training a conditional score network. Please correct me if I am wrong.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some additional questions: 1. What is the difference between (2) and (3)? 2. The experiment part only considers a simple low-dimensional problem, is it possible to add some results of large-scale problems? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their feedback. We are happy to clarify our manuscript in response to the Reviewer's questions. We hope that this could lead to an increase in their score. 1. **Conditional SDMs.** Various approaches have been proposed in the literature for dealing with conditioning, both in the finite- and infinite-dimensional cases. In [1], an effective approach involves subspace projection during time-reversed diffusion. The subsequent noise-level-dependent update yields favorable application outcomes. In [2], three methods are discussed from theoretical and practical viewpoints in finite dimensions. It is indeed not obvious what the best way is to enforce the constraint associated with observations, what the role of the noise level is, and how to incorporate the constraints computationally. These challenges are particularly acute in the infinite-dimensional case (see the next answers). We agree that a Bayesian, conditional-score perspective is a natural approach. It is important, however, to understand how the conditioning affects the score and the training, and what the role of noise is. We have shed light on this important challenge by considering the Gaussian context discussed in Section 4, where we get explicit insight about the conditional score. This reveals in particular the singular behavior that one may observe for the time-reversed diffusion in the case of low-noise observations. Moreover, we generalized the results in [2] regarding a conditional denoising estimator in infinite dimensions. This result is important for efficient training and provides a theoretical underpinning for efficient conditional generation and further research. Indeed, in the context of inverse problems, efficient implementation of the conditioning is the central challenge. 2. **Title.** Regarding the potentially misleading title, we don't know if it can be modified at later stages.
While we agree that we could have been more vocal in the title about the specific problems we are addressing, we remark that the vast majority of papers concerning SDMs and inverse problems (IPs) focus on linear IPs. Addressing nonlinear IPs is a challenging task, and the limited subset of papers that delve into more general cases typically highlight this in their titles from the outset. However, we will make a concerted effort to emphasize in the abstract and throughout the text that we are exclusively considering linear IPs. 3. **Score of the likelihood.** Contrary to methods that incorporate the gradient of the log-likelihood in order to sample from the posterior, ours does not assume knowledge of the forward model, as only data pairs are used to learn the score. We remark that, while using the log-likelihood in SDMs for linear IPs looks straightforward, it is in fact analytically intractable in terms of DMs, due to their dependence on time. That is why existing approaches resort to projections onto the measurement subspace [3, Section 5.4]. However, we remark that Pidstrigach's implementation [3, Section 6] utilizes a UNet to parameterize the score, limiting the evaluation of the score function to the training interval only. In contrast, our method is not limited to the grid on which we initially train our network; in other words, we truly exploit the advantages of the infinite-dimensional approach. Additionally, we remark that projection-type methods are primarily heuristic and suffer from instability when dealing with ill-posed IPs. Recently, some workarounds have been proposed to tackle these issues in finite dimensions [4, 5], but our method, besides not assuming any knowledge of the forward model and operating directly in infinite dimensions, is also designed to offer an alternative to data-specific inference. In fact, projection-type methods can involve costly forward-operator computations.
Our method, instead, learns an amortized version of the conditional score and, by doing so, addresses a critical gap in the literature on infinite-dimensional SDMs. There exist cases in which amortized methods can be a preferred option for Bayesian IPs (see line 119 and the accompanying references in our paper). 4. **Difference between (2) and (3).** In finite dimensions, the drift of the reverse stochastic differential equation (SDE) involves the score function $\nabla \log p_t$, where $p_t$ represents a density with respect to the Lebesgue measure. However, in infinite-dimensional Hilbert spaces, this density is no longer well-defined (the Heine-Borel theorem does not hold in such spaces). As a result, the left-hand side of equation (2) cannot be interpreted literally in infinite dimensions. In equation (3), $S(t,x)$ is thus defined formally, leveraging the fact that the right-hand side of equation (2) is well-defined in infinite dimensions. This implies that we need to demonstrate that $S(t,x)$ truly represents the score function, i.e., the drift of the reverse SDE. Analyzing the forward-reverse conditional SDE framework is an important contribution of our paper. 5. **Is it possible to add results on large-scale problems?** Yes, see the PDF included in our global response. Figure 2 illustrates a new experiment, which we will include in the appendix, demonstrating the applicability of our method to a large-scale inverse problem in geophysics, namely linearized seismic imaging via the Born approximation with a $256 \times 256$-dimensional unknown parameter. **References** [1] Y. Song et al., *Solving inverse problems in medical imaging with score-based generative models*, 2022. [2] G. Batzolis et al., *Conditional image generation with score-based diffusion models*, 2021. [3] J. Pidstrigach et al., *Infinite-dimensional diffusion models for function spaces*, 2023. [4] H. Chung et al., *Diffusion posterior sampling for general noisy inverse problems*, 2023. [5] Y.
Wang et al., *Zero-shot image restoration using denoising diffusion null-space model*, 2022. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: I thank the authors for their detailed responses, and I have increased my score accordingly.
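For reference, the finite-dimensional reverse-time SDE alluded to in the rebuttal's answer on the difference between (2) and (3) is typically written as follows. This is the standard formulation from the score-based generative modeling literature (Anderson's time-reversal result, as popularized by Song et al.), not an equation taken from the paper under review; the notation $f$, $g$, $p_t$ is assumed here:

```latex
% Forward SDE with drift f, diffusion coefficient g, and marginal densities p_t:
%   dX_t = f(X_t, t)\,dt + g(t)\,dW_t .
% Its time reversal, run from t = T back to t = 0, has the score in its drift:
\[
  \mathrm{d}\bar{X}_t
    = \Bigl[ f(\bar{X}_t, t) - g(t)^2 \, \nabla_x \log p_t(\bar{X}_t) \Bigr]\,\mathrm{d}t
      + g(t)\, \mathrm{d}\bar{W}_t .
\]
% In an infinite-dimensional Hilbert space, p_t (a density with respect to
% Lebesgue measure) does not exist, which is why the rebuttal explains that the
% score S(t, x) must be defined formally via the well-defined right-hand side.
```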
Summary: Score-based diffusion models are successful in solving inverse problems in a finite-dimensional setting, but infinite-dimensional diffusion models need to be constructed with care, as the definitions of Lebesgue measures and densities become less clear. The authors extend the work of Pidstrigach et al. [33] on unconditional score matching in infinite dimensions to a conditional setting. Strengths: I recommend a strong accept for the paper because of its theoretical soundness and its approachable presentation. The paper presents a theoretically elegant and principled approach to solving inverse problems in infinite dimensions, as it is guided by Stuart’s principle and does not involve projection to finite vector spaces and discretization when unnecessary. The paper also analyzes a general scenario with prior distributions absolutely continuous w.r.t. the Gaussian measure, and proposes results analogous to those of Pidstrigach et al. [33]. Weaknesses: I cannot identify a specific point of weakness that has to be addressed. One can argue against its simple experiment, but I see a proof-of-concept experiment as sufficient for this paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have a few questions regarding the general aspects of Bayesian inference in infinite dimensions, as I am not familiar with the exact formulation. - The observational model in the paper occupies a finite-dimensional subspace. Is there a scenario where one cannot find an orthonormal basis such that the observation $y$ only spans a finite subspace? - While I understand that Assumption 1 _can_ be satisfied under certain conditions given by the prior measure's Radon-Nikodym derivative, does this assumption suffer from finite training data? If we think about extreme settings with only a few training data points, the score matching essentially tries to memorize these noiseless data points, causing an automatic violation of the assumption.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - Line 15 typo: it should read "extension of ... to the *conditional* setting". - Minor citation error on Line 72: The seminal score matching paper [17] has Hyvärinen as the sole author. - Line 293 typo: “… proposition _on_ such set of conditions” Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for their positive feedback. We are happy to clarify our manuscript in response to the Reviewer's questions. 1. **Finite-dimensional observational model.** Indeed, if the number of observations is finite, we may think that the observations only span a finite subspace. There are, however, instances where considering infinite-dimensional measurements can prove advantageous. This is particularly relevant when aiming to demonstrate the robustness of theoretical results, such as the asymptotic behavior of sample errors, regardless of the method used to discretize measurements. Such situations often arise when the data are functions observed on a dense array. Andrew Stuart provides an example in [1, Section 3.5]: the inverse problem of determining the initial condition for the heat equation, given a noisy observation of the solution at a positive time. 2. **About Assumption 1.** This is a good remark. Indeed, we may face a blow-up of the score under such conditions, and the results in Section 4 make this explicit in the case of a Gaussian prior. However, this situation should only occur with noiseless data. As soon as the data are noisy, we believe Assumption 1 should be reasonably easy to satisfy. 3. **Typos.** Thank you for noticing the typos. We will fix them. **References** [1] A. Stuart, *Inverse problems: a Bayesian perspective*, Acta Numerica 19 (2010), 451-559. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: I thank the authors for addressing the points I laid out in the review, and maintain the same score assessment for this paper.
Rebuttal 1: Rebuttal: We thank the Reviewers for their valuable and constructive feedback. Based on your comments, we have taken significant steps to enhance our Section 6. We are now including additional experiments that demonstrate the applicability of our method to large-scale problems and showcase its discretization invariance. **Preliminary results can be found in the attached PDF, which includes relevant figures for your reference.** In accordance with your suggestions, we have outlined the following improvements: 1. **First Experiment.** Taking advantage of the additional content page for the camera-ready version, if our paper is accepted we will extend Section 6 by incorporating an experiment demonstrating the discretization invariance of our proposed method by sampling a bi-modal non-Gaussian conditional distribution over grids with varying discretizations. In particular, we present results related to sampling the posterior over various discretizations of the $[-3, 3]$ domain. Figure 1 in the attached PDF showcases the outcomes of this experiment by displaying the samples and the marginal conditional distributions at $y=-1, 0, 0.5$ (we used the relation $X_0= a y^2 + \epsilon$). We enhanced robustness against different discretizations by training the score-based model on data residing on nonuniform grids containing $15$ to $50$ grid points. No changes were made to other hyperparameters compared to the original experiment. 2. **Second Experiment.** We will add a new experiment in the appendix to demonstrate the applicability of our method to a large-scale inverse problem in geophysics, specifically linearized seismic imaging via the Born approximation (Figure 2 in the PDF). More specifically, the problem we address involves estimating the short-wavelength component of the Earth's unknown subsurface squared-slowness model (refer to Figure 2a in the uploaded PDF), using measurements collected at the surface. 
This inverse problem, known as *seismic imaging*, can be formulated as a linear inverse problem by linearizing the nonlinear relationship between recorded data and the squared-slowness model, as governed by the wave equation. In its simplest acoustic form, the linearization around a background squared slowness model (illustrated in Figure 2b of the uploaded PDF) leads to an inverse problem for estimating the true seismic image (depicted in Figure 2a). Given the high dimensionality of the observed data, we summarize it by projecting the data back into seismic image space using the adjoint Born operator, as shown in Figure 2c. This process leads to the reverse-time migrated image, which we utilize instead of data to train our conditional score-based model. We carried out the training for $300$ epochs using $4750$ pairs of true seismic images and associated reverse-time migrated images from the 3D Parihaka real dataset, with each image being of size $256 \times 256$. We used a batch size of $128$ and an initial learning rate of $2 \times 10^{-3}$, which decays to $5 \times 10^{-4}$ over the epochs following a power-law rate of $-1/3$. Post-training, for a new seismic image (refer to Figure 2a in the uploaded PDF), we simulate seismic data and compute the reverse-time migrated image using the adjoint Born operator. The trained conditional score-based model is then employed to sample from the posterior distribution of the seismic image, given the reverse-time migrated image. We draw $1000$ samples to compute the conditional (posterior) mean estimator, visualized in Figure 2c of the PDF. These samples are also utilized to calculate pointwise standard deviations (Figure 2d) as a measure of uncertainty. As anticipated, the pointwise standard deviation highlights areas of high uncertainty, particularly in regions with complex geological structures$-$such as near intricate reflectors and areas with limited illumination (deep and close to boundaries). 
The regions of significant uncertainty correspond well with challenging-to-image sections of the model. This observation becomes more apparent in Figures 2h and 2i, displaying two vertical profiles with $99$\% confidence intervals (depicted as orange-colored shading), which demonstrate the expected trend of increased uncertainty with depth. Furthermore, we notice that the ground truth (indicated by dashed black lines) largely falls within the confidence intervals for most areas. We also observe a strong correlation between the pointwise standard deviation and the error in the conditional mean estimate (Figure 2e), confirming the accuracy of our Bayesian inference method. To prevent bias from strong amplitudes in the estimated image, we present the normalized pointwise standard deviation divided by the envelope of the conditional mean in Figure 2f. This visualization provides an amplitude-independent assessment of uncertainty, highlighting regions of high uncertainty at the onset and offset of reflectors (both shallow and deeper sections). Additionally, the normalized pointwise standard deviation underscores uncertainty in areas of the image where there are discontinuities in the reflectors (indicated by black arrows), potentially indicating the presence of faults. With these updates and the additional improvements we are implementing based on our responses in the rebuttal, we sincerely hope to have addressed the concerns raised by the Reviewers. If our responses satisfied your concerns, we kindly ask that you consider revisiting your score accordingly. Please let us know if there is anything else that we can do or clarify to enhance the quality of this paper. Once again, thank you for your feedback. Pdf: /pdf/1ff2d59439d91edacfed963aa7bd416bf0d04ec7.pdf
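As an aside, the learning-rate schedule described in the second experiment above (an initial rate of $2 \times 10^{-3}$ decaying to $5 \times 10^{-4}$ following a power law with exponent $-1/3$) admits a simple implementation. The exact functional form below is an assumption, since the rebuttal only specifies the endpoints and the exponent:

```python
def lr_schedule(epoch: int, lr0: float = 2e-3, lr_min: float = 5e-4,
                power: float = -1.0 / 3.0) -> float:
    """Power-law learning-rate decay, clipped from below.

    Assumed form: lr(epoch) = max(lr_min, lr0 * (epoch + 1) ** power),
    chosen to match the stated initial rate, final rate, and -1/3 exponent.
    """
    return max(lr_min, lr0 * (epoch + 1) ** power)

# Over the 300 training epochs mentioned in the rebuttal, the rate starts at
# 2e-3, decays as a power law, and bottoms out at the 5e-4 floor.
rates = [lr_schedule(e) for e in range(300)]
```

One design note: clipping with `max` keeps the schedule well-defined past the point where the raw power law drops below the stated final rate, which happens roughly a fifth of the way through training under these assumed constants.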
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors extend score-based diffusion from finite-dimensional processes to separable Hilbert space processes. They demonstrate on a non-linear toy dataset that the method can work. Strengths: The paper reads very well and is easy to follow. It is important to study what happens in general separable Hilbert spaces, as many algorithms and models break down in the infinite-dimensional setting. And even though real applications always live in the finite-dimensional setting, this is still important: if the method works in a separable Hilbert space setting, it will not break down when you increase the precision of your finite-dimensional discretization. Weaknesses: The numerical experiment is not very convincing. The results for this toy problem are not very impressive. I suspect that if one runs multiple parallel versions of a Crank–Nicolson algorithm one would get better results. It also would be interesting to see how the numerical method scales in practice when you increase the resolution of the grid. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - When you in Section 4 let $(Af)_i=(v_k,f)$, where $v_k$ is an eigenvector, is it not very obvious what you get? I mean, the problem is an infinite series of independent processes and the data don't couple the processes. Does it not mean that you are back to the finite-dimensional case (since if $v_j \notin A$ then for that $j$ the process is equivalent to the prior)? - In the application setting I don't get the prior on $X_0$, and when I look in the code it looks like one has used $x_0$ as just a line? - What is the operator $C$ in this example? Very very minor: Do you really have to write "infinite-dimensional" in each sentence? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for their critical feedback. We are happy to clarify our manuscript in response to the Reviewer's questions. 1. **Numerical experiment not very convincing.** We acknowledge your remark, along with those of the other reviewers, regarding the need to improve our toy example. In our global response, we have included a PDF that contains additional experiments that will be incorporated into the paper. We have addressed your specific concern by incorporating results indicating the discretization invariance of our method, sampling the toy conditional distribution over grids with varying discretizations. These preliminary results show that we are able to sample the bi-modal non-Gaussian conditional distribution (Figure 1 in the attached PDF). Furthermore, we have included a large-dimensional example that involves learning an amortized approximation to the posterior distribution of a wave-equation-based inverse problem over a $256 \times 256$-dimensional unknown, illustrating the applicability of our method to large-scale problems (Figure 2 in the PDF). We sincerely hope that these updates could lead to an improvement in your assessment of the paper. 2. **When in Section 4 you let $(Af)_i = (v_k,f)$, where $v_k$ is an eigenvector, is it not very obvious what you get?** The Gaussian setting of Section 4 makes it possible to carry out explicit and detailed calculations, because it is indeed possible to work $j$ by $j$. The main purpose of Section 4 is to show that the extension of score-based diffusion models to the conditional setting is not trivial, but possible, in the infinite-dimensional setting. As a byproduct, it also shows that this extension requires some conditions even in the finite-dimensional setting, because the blow-up of the score that we exhibit also holds in finite dimensions with noiseless data. So even the finite-dimensional setting is not obvious!
The singularity in the conditional score as noise vanishes is indeed a well-known phenomenon in the finite-dimensional setting. It has important implications. For a discussion about the difficulties this singularity causes in practice and numerical methods for approximating the score at small times in score-based models in the finite dimensional case, we refer to [1, 2]. As for our paper, we anticipate that Assumption 1 is reasonably easy to satisfy. However, we must be mindful of the possibility of encountering a blow-up of the score under the conditions described by Reviewer hg8t. To address this concern, we will add a remark in our paper. 3. **$C$ and prior on $x_0$ in the toy example.** Incorporating the suggestions we have received, we have made the decision to extend and improve Section 6. Consequently, we will be updating our implementation. The revised Section 6 will showcase an extended numerical experiment, demonstrating not only the success of our method in approximating a bi-modal non-Gaussian conditional distribution but also its discretization-invariance in practice as the grid resolution is varied (Figure 1 in the PDF included in the global response). Moreover, we will include a new experiment in the appendix, demonstrating the applicability of our method to a large-scale inverse problem in geophysics, specifically linearized seismic imaging via the Born approximation (Figure 2 in the PDF), that involves estimating a $256 \times 256$-dimensional unknown parameter. To address your other questions, in the toy example currently provided in the paper, we do not employ any explicit prior information, and in our amortized variational inference approach the prior is implicitly learned during training using the provided dataset of joint samples. 
We acknowledge that the notation used in Section 6 might have been misleading, as our focus in the current example in the paper is on learning the conditional distribution of $y$ given $x_0$, while in the rest of the paper it is the opposite (we fixed the notation in the PDF included in the global response). As for the operator $C$, we are utilizing the finite-dimensional approximation outlined in [3, Appendix G]. Specifically, we employ [4, Equation 11] as the projected equation. 4. **Too many "infinite-dimensional"s in the paper.** Thank you for the feedback. We will work on reducing the redundancy in the writing. **References** [1] D. Kim, S. Shin, K. Song, W. Kang, and I.-C. Moon, *Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation*, 2022. [2] T. Dockhorn, A. Vahdat, and K. Kreis, *Score-Based Generative Modeling with Critically-Damped Langevin Diffusion*, 2022. [3] J. Pidstrigach, Y. Marzouk, S. Reich, and S. Wang, *Infinite-dimensional diffusion models for function spaces*, arXiv:2302.10130. [4] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, *Score-based generative modeling through stochastic differential equations*, ICLR 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. You have clarified several things for me. I will consider raising my score.
Higher-Order Uncoupled Dynamics Do Not Lead to Nash Equilibrium - Except When They Do
Accept (poster)
Summary: The paper studies higher-order dynamics, i.e., dynamics that can rely on more auxiliary states than those limited by the dimensionality of the action spaces, in network games with pairwise interactions between players. Importantly, these dynamics only depend on the sequence of payoff signals that each player receives and are thus called uncoupled (an important property in multi-agent settings). The paper shows that linear versions of such dynamics can locally lead to any isolated mixed Nash equilibrium. However, for each such dynamic, there exists a "simple" anticoordination game for which this dynamic will not work, i.e., will not stabilize the unique isolated mixed NE. Furthermore, as the paper explains, a linear dynamic that converges to a mixed NE may not be meaningful after all. **Post-rebuttal**: After reading the other reviews and the authors' responses, I conclude that my concerns stem from my limited understanding of the techniques that are used in the paper. In response, I increase my score from 3 to 5 (and the contribution subscore from 2 to 3) to reflect my evaluation of the results, but I decrease my confidence from 3 to 2 to reflect that I did not understand (parts of) the techniques used in the paper. Strengths: - The paper provides existence results for uncoupled dynamics that (locally) converge to mixed Nash equilibria. To do so, the paper creates higher-order dynamics that overcome known limitations of lower-order dynamics, i.e., of dynamics whose dimension is constrained by the dimension of the action spaces. - The main takeaways of the paper are clearly presented. Weaknesses: - The paper is highly non-self-contained: there is a strong reliance on prior literature and on the appendix. - As a result of the above and of the complicated notation, the paper was very hard for me to read.
Although I couldn't follow some parts and couldn't verify the derivation of some results, the results seem plausible and the main takeaways are still clear (as mentioned above). - I found the motivation of the main dynamics in line 140 (paragraph 3.2) inadequate - but this may be related to the fact that the exposition was not good enough for me to be able to follow. - While the results inform the discussion on paper [16], they are merely existential and quite general. So, their practical and theoretical scope may be rather limited. - The paper could have done a better job of building upon more recent literature regarding convergence of dynamics to NE in games. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can the authors address the weaknesses mentioned above? - Line 22: not the most updated list of papers. Here is an indicative reference to help the authors locate more recent papers in the area, in my opinion: https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Paper.pdf - Line 77: P_i(x_-i) and P(x_-i): I don't understand the subscript i in P_i. Also, the next sentence in lines 77-78 is even more confusing. - Line 80: is "a" tuple (and other such minor typos - the paper needs a proofreading) - Equations (4), (5) and (7): I had a hard time following the derivation of these equations. This is one instance of my comment above that the paper is highly non-self-contained. - Line 160: wasn't y_i defined above as \dot v_i? Is that the same? - Line 171: stabilizability and detectability seem to be important but are only defined in the Appendix. I think that the paper needs to be rewritten in a way that is self-contained and easier to follow. - Line 241: locally exponentially stable - the same here. This notion has not been defined before. - Line 288-289: I missed this argument. Is it that the dynamics fail to monotonically improve with respect to input payoffs (line 282)?
Again, I couldn't understand the argument without relying on the Appendix. - Lines 297/304: anticipatory higher-order learning/passivity, contractive games - another series of terms that are used without having been defined before. At least the conclusions should be accessible to a wider audience, but this is not the case. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Question>> Can the authors address the weaknesses mentioned above? See below for an item-by-item discussion. Question>> Line 22: not the most updated list of papers. The list of papers, while admittedly brief, is representative and covers the qualitative aspects of convergence or non-convergence. We are glad to visit the suggested reference and the papers cited therein. The suggested reference is a nice complement to the present paper in that it contrasts what can be concluded with and without higher-order dynamics. Question>> Line 77: I don't understand the subscript in P_i. Also, the next sentence in lines 77-78 is even more confusing. The subscript $i$ indicates that $P_i$ is the payoff vector of player $i$, which, as mentioned, has the same dimension as the strategy vector $x_i$. The subscript was mistakenly dropped in line 77. The sentence in line 78 suggests viewing each entry of $P_i$ as the payoff associated with a certain strategy of player $i$. Question>> Line 80: is "a" tuple (and other such minor typos). A final version will take care of all the minor editorial issues. Question>> Equations (4), (5), and (7): I had a hard time following the derivation of these equations. As strategies evolve on the simplex, it is beneficial to project the dynamics to their natural dimension (e.g., the simplex in R^3 is two-dimensional). The variable $w_i$ represents movement on this low-dimensional subspace. A revision can reinforce this interpretation. Question>> Line 160: wasn't y_i defined above as \dot v_i? Is that the same? The specific linear dynamics that ensure compliance with Assumption 2.1 have an internal state denoted by $v_i$, and the equation $\dot{v}_i = \ldots$ describes the evolution of the state $v_i$. The variable $y_i$ is the output of this specific linear system. Question>> Line 171: stabilizability and detectability seem to be important but are only defined in the Appendix.
Without going into the definitions, an implication of stabilizability and detectability is the existence of *coupled* learning dynamics that lead to Nash equilibrium. This is a clear prerequisite for the existence of uncoupled learning dynamics. A revision can reinforce this point. Regarding the paper being self-contained, the paper uses both basic concepts from control theory and less widely utilized ones (decentralized stabilization and strong stabilization). Accordingly, we believe that a peripheral value of the paper is bringing these concepts to the forefront in multi-agent learning. Question>> Line 241: locally exponentially stable The notion is a basic one in the study of dynamical systems. A revision can add a reference. Question>> Line 288-289: Is it that the dynamics fail to monotonically improve with respect to input payoffs (line 282)? The issue is that the dynamics fail to converge (monotonically or not) to a best response to a *constant* payoff vector. A revision can better explain this point. Question>> Lines 297/304: At least the conclusions should be accessible to a wider audience, but this is not the case. There are two parts to this paragraph. The first is that there is a discussion to be had on what constitutes “natural”. This is stated plainly. The second part, which is less accessible, points to candidate notions with references, some more familiar than others (e.g., no regret vs. passivity). A revision can try to broaden the accessibility. Weakness>> The paper is highly not self-contained. There is a strong reliance on prior literature and the appendix. We are doing our best, within the page limitations, to bring into the paper what is essential to the main contributions, and we refer to prior literature and the appendix for background and support. The current approach maintains the flow of the paper while providing a thorough overview in the appendix to help the reader navigate the paper.
Weakness>> As a result of the above and of the complicated notation, the paper was very hard for me to read. Although I couldn't follow some parts and couldn't verify the derivation of some results, the results seem plausible and the main takeaways are still clear. We are encouraged to hear that the main takeaways are clear. As mentioned previously, a peripheral value of the paper is that it brings new tools to multi-agent learning, and we are trying to work within the page restrictions accordingly. Weakness>> I found the motivation of the main dynamics in line 140 inadequate. The main dynamics are a special case of the general form in Section 2.3 where (i) the baseline learning rule is gradient play and (ii) the higher-order modification is restricted to be linear. The payoff vector is processed through specific linear dynamics (call it a preprocessing phase) to ensure compliance with Assumption 2.1, and then the output $y_i$ of the preprocessing procedure is used in the general decision dynamics (which are also linear). We can reinforce this interpretation in a revision. Weakness>> While the results inform the discussion on paper [16], they are merely existential and quite general. So, their practical and theoretical scope may be rather limited. There is a clear theoretical contribution in delineating what is or is not possible under uncoupled learning rules. Furthermore, the control-theoretic tools used herein enable the analysis of higher-order multi-agent learning beyond widely studied cases such as zero-sum games or pure-strategy equilibria. Regarding the statement that the results are merely existential, there are approaches in the control theory literature to construct higher-order dynamics. We chose to omit these constructions since the focus here is on fundamental limits under uncoupled learning.
Weakness>> The paper could have done a better job in building upon more recent literature.
We can include more recent literature, such as the paper mentioned by the reviewer. However, none of the recent literature addresses the specific questions of the present paper.

--- Rebuttal Comment 1.1: Title: Post Rebuttal Acknowledgement Comment: I thank the authors for responding to my comments. After reading their response and the other reviews, I conclude that all of my concerns (all 5 points in the weaknesses that I mentioned and many of the points raised in the questions) stem from my poor understanding of the techniques that are used in the paper. I still think that the authors could have provided a better exposition to aid readers who are familiar with game-theoretic learning dynamics but not necessarily with the tools used in the paper, but I don't insist that the authors make any particular changes regarding what to include in the main part or in their references. I only encourage the authors to implement the changes proposed in their response to my comments above that they think will improve their paper. Based on the above, I increase my score from 3 to 5 but reduce my confidence from 3 to 2.
Summary: This paper studies multi-agent learning dynamics, and the central question is whether there is an iterative learning process in a multiplayer game that leads to a Nash equilibrium. There has been substantial prior work on this question for many specific games and learning strategies—generally it is not the case that known dynamics lead to a Nash equilibrium, and this is perhaps frustrating, as it would be convenient to be able to find Nash equilibria this way. The authors approach this problem in a way that is novel, as far as I know, by specifying a class of interesting/acceptable learning dynamics and using tools from feedback control systems to prove properties of that class. For interesting/acceptable, they require that dynamics are "payoff based," which essentially limits the ability of each agent to see "into" the action space of other agents, and, I believe for tractability of the analysis, they limit the dynamics to a kind of generalized higher-order gradient play. They specifically show that 1. If a game has a strictly mixed Nash equilibrium, there exist payoff-based dynamics that converge locally to that NE. (As a consequence, they show that these dynamics also converge to NE of "nearby" games.) 2. However, there are no "overall good" dynamics—for any such dynamics, there exists a game with a unique mixed NE such that these dynamics are unstable at that NE.

Strengths: The results are likely to be novel and provide important context for those who study multi-agent learning dynamics. As is typical, there are some asterisks (the particular learning dynamics are general, but I think the broader questions could motivate looking at an even more general class). The negative result about games with a unique NE seems particularly strong. The authors have done substantial work to make the results interpretable, at least on the surface, to a broader audience that is not familiar with the control-theoretic tools they are using.
I would challenge the authors to go even further with this (see weaknesses below). Weaknesses: For someone who is not familiar with the specific tools the authors use, the paper is quite hard to follow, even if the game-theoretic aspects are clear and familiar. Appendix D has some worked examples with plots in them. I would suggest thinking about whether it is possible to move some of this content to the main paper. The figures in the paper itself are not that helpful and could be improved, potentially with more detail. The paper lacks good figures currently. Anything the authors can do to further broaden the context of their results would be helpful. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would be interested in the authors' thoughts on whether they view the class of higher-order dynamics they study as restrictive or not. It would be helpful to know, of the papers they cite, in which cases their dynamics class subsumes that of the methods proposed by that paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Yes, this is a theoretical work and the assumptions of the theorems are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
Question>> I would be interested in the authors' thoughts on whether they view the class of higher-order dynamics they study as restrictive or not. It would be helpful to know, of the papers they cite, in which cases their dynamics class subsumes that of the methods proposed by that paper.
For the convergence results of Section 4, more restrictions strengthen the results in that additional structures are not needed. The framework set forth in Section 2.3 does capture prior classes of higher-order learning dynamics characterized by differential equations. The paper goes on to restrict its attention to higher-order gradient learning.

Weakness>> For someone who is not familiar with the specific tools the authors use, the paper is quite hard to follow, even if the game-theoretic aspects are clear and familiar. Appendix D has some worked examples with plots in them. I would suggest thinking about whether it is possible to move some of this content to the main paper. The figures in the paper itself are not that helpful and could be improved, potentially with more detail. The paper lacks good figures currently. Anything the authors can do to further broaden the context of their results would be helpful.
Working within the page limitations, one possibility is to move the proof sketches to the appendix and bring a couple of examples into the main body. The message of Figure 1 is an important one for making the connection to feedback systems. We will improve the caption to reinforce this message.

--- Rebuttal Comment 1.1: Title: Response to the authors Comment: I'm not fully satisfied with the authors' responses (or the paper in general—it is just really difficult). But I will keep my score—I think the paper still has substantial strengths.
Summary: This paper studies higher-order payoff-based learning, and in particular higher-order gradient play in this setting. The authors show that for games with an isolated completely mixed NE, there are higher-order gradient play dynamics that converge to that NE. Moreover, that same NE is converged to in ‘nearby’ games as well. However, the authors also show that there exist anti-coordination games where higher-order gradient play dynamics fail to converge to NE. Finally, the authors also argue that dynamics that do lead to NE in a coordination game must be inherently unstable.

Strengths: I find the paper to be well structured and readable. The results in the context of uncoupled learning dynamics in games are also interesting and meaningfully extend existing ideas to higher-order dynamics.

Weaknesses: A weak point in the paper for me is that the results are based upon the assumption that the mixed NE is a practically useful solution concept for learning in games. However, recent works have shown not only that the NE can often be a poor metric for players’ performance, but also that it is unnatural for players using decentralized dynamics to converge to an NE in general games. Thus, for a paper that focuses on higher-order learning, it would have been much more compelling to present a more complete picture of higher-order gradient dynamics. What characterizes stable equilibria/fixed points for these dynamics in this setting? In cases where NE are not stable, what do the dynamics look like? In my opinion, a broader view of the dynamical system properties would make the results more interesting and useful. Technical Quality: 3 good Clarity: 3 good

Questions for Authors: For the higher-order dynamics, is there intuition about the bandit setting where players only observe (potentially random) realizations of their payoffs? This seems more reasonable for the cases where payoff vectors are large/there are a large number of players and complexity is a concern.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations of the dynamics studied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
Question>> For the higher order dynamics, is there intuition about the bandit setting where players only observe (potentially random) realizations of their payoffs? This seems more reasonable for the cases where payoff vectors are large/there are a large number of players and complexity is a concern.
As noted by the reviewer, a main issue is where payoff vectors are large (a large number of players need not imply a large payoff vector). We believe that the present results can be used to analyze instantaneous scalar payoffs in the case of discrete-time learning with randomized action selection. The continuous-time ordinary differential equations (ODEs) presented in this paper can be seen as the ODEs that emerge from the ODE method of stochastic approximation (e.g., Benaim, “A dynamical system approach to stochastic approximations”, 1996) to analyze discrete-time stochastic iterations. Prior work (Fudenberg and Levine, “Consistency and cautious fictitious play”, 1995) illustrates how the scalar payoff case of fictitious play can be analyzed using such methods. This approach was also utilized to analyze higher-order learning under scalar payoffs for the specific case of “anticipatory” higher-order learning (Arslan and Shamma, “Distributed convergence to Nash equilibria with local utility measurements”, 2004). Likewise, we believe that the case of instantaneous scalar payoffs can be addressed using the setting in the present paper as the basis of the emergent ODEs of stochastic approximation.

Weakness>> A weak point in the paper for me is that the results are based upon the assumption that the mixed NE is a practically useful solution concept in learning in games. However, recent works have shown that the NE can often not only be a poor metric for players’ performance, but also it is unnatural for players using decentralized dynamics to converge to an NE in general games.
Thus, for a paper that focuses on higher order learning it would have been much more compelling to focus on a more complete picture of higher order gradient dynamics. What characterizes stable equilibria/fixed points for these dynamics in this setting? In cases where NE are not stable, what do the dynamics look like? In my opinion, a broader view of the dynamical system properties would make the results more interesting and useful.
The question of the relevance of Nash Equilibrium is a long-standing one that won’t be resolved in this paper. Nonetheless, interest in Nash Equilibrium remains widespread. Also, studying higher-order uncoupled dynamics is an interesting topic on its own. We further believe that these results may be relevant to settings other than mixed-strategy Nash equilibrium, such as higher-order multi-agent learning in the absence of strict convexity. As recognized by the reviewer, recent work shows the lack of natural decentralized dynamics that converge to a NE in general. This paper contributes to that line of work by relaxing the requirement of generality (Section 4) while also showing the lack of universality (Section 5). Furthermore, the topic of strong stabilization reinforces the lack of natural dynamics converging to the mixed equilibrium of a coordination game (Section 6). Regarding the characterization of stable equilibria (beyond eigenvalues), the results of this paper establish that it depends on both the specifics of the game and the structure of the higher-order dynamics.

--- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thank you for your clarifications and explanations. It clears up some doubts I had about the paper, and if the authors would add some further explanation and comparison of their dynamics with previous work and clarify their contributions in the paper, I think the paper would be very suitable for NeurIPS. Best regards, Reviewer mLZz
Summary: The paper shows that for any finite game with an isolated completely mixed Nash equilibrium, there exist payoff-based higher-order gradient play dynamics that lead (locally) to that Nash equilibrium, both for this game and all payoff-nearby games. Conversely, they show that for any higher-order gradient play dynamics, there is a game with a unique isolated completely mixed Nash equilibrium for which those dynamics do not locally converge to that Nash equilibrium. Hart and Mas-Colell proved, using an anti-coordination game, that no first-order uncoupled dynamics lead to the unique interior Nash equilibrium of that game. Shamma and Arslan (2005) proved that there are higher-order dynamics which lead to that equilibrium. This paper shows that this extends to all games.

Strengths: The tools used are, up to my knowledge, not standard in evolutionary game theory: decentralized stabilizing control and the root locus, which characterizes the locations of the eigenvalues of a matrix as a function of a scalar parameter!

Weaknesses: The biggest weakness for me is that there is no micro-foundation of the class of higher-order dynamics nor a comparison with the previously studied higher-order dynamics. Technical Quality: 3 good Clarity: 3 good

Questions for Authors: - Are your dynamics more general than all the previously studied ones (such as [18, 19, 20, 23] etc.)? - Do you have any micro-foundation of your higher-order dynamics? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The converse result (for any higher-order dynamics, there is a game with a unique isolated completely mixed Nash equilibrium) is proved only for gradient play dynamics and not all higher-order dynamics!
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
Question>> Are your dynamics more general than all the previously studied ones (such as [18, 19, 20, 23] etc.)?
The above papers are specific instances of higher-order dynamics. Setting aside continuous-time/discrete-time differences, the framework of higher-order learning outlined in Section 2.3 is more general. The convergence results of Section 4 show that higher-order gradient play is sufficient to lead to any mixed equilibrium, and so more generality is not needed.

Question>> Do you have any micro-foundation of your higher-order dynamics?
The issue of micro-foundations depends on the specifics of the higher-order dynamics. The structure presented in Section 2.3 allows a higher-order augmentation of existing learning rules for which there are micro-foundations (e.g., Sandholm, “Population Games and Deterministic Evolutionary Dynamics”, in Handbook of Game Theory with Economic Applications, 2015). These learning rules have agents reacting to the payoffs, and higher-order dynamics can capture path-dependent phenomena in these payoffs, such as recency bias (e.g., Fudenberg and Levine, “Recency, consistent learning, and Nash equilibrium”, 2014).

Weakness>> The biggest weakness for me is that there is no micro-foundation of the class of higher-order dynamics nor a comparison with the previously studied higher-order dynamics.
See the above responses to the questions raised by this reviewer.

--- Rebuttal Comment 1.1: Comment: I thank the authors for their reply. However, they don't completely resolve my concerns. For example, the authors in [23] spent some effort to justify their dynamics, even if they are a natural extension of the replicator dynamics. It would be of interest if the authors seriously investigated the foundational question. Also, when they claim that all the previous dynamics are a particular case of their dynamics, I can believe it, but adding some examples in this direction would improve the paper.
That said, I believe this is a good paper, which contains some important contributions and techniques, and so is a good candidate for NeurIPS.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper shows the lack of universality on the side of both games and learning dynamics (even for higher-order ones)! Particularly, for any game with a mixed-strategy Nash equilibrium (NE), there exists an uncoupled payoff-based (possibly higher-order) dynamic converging locally to the NE. However, any such dynamic can also be destabilized by a suitable anti-coordination game. Notably, the paper uses classical analysis methods in feedback control systems. Highlighted similarities between higher-order learning in games and higher-order optimization algorithms, such as momentum-based or optimistic gradient algorithms, are also interesting.

Strengths: The widely studied fictitious play dynamics are known to converge to equilibrium in many interesting games but not all of them, e.g., see Shapley's counterexample. Therefore, researchers were looking for a learning dynamic that converges to equilibrium in every game to justify equilibrium analysis. However, Hart and Mas-Colell, Ref. (16), proved the negative result that there do not exist (first-order) uncoupled learning dynamics that converge to equilibrium in anti-coordination games, and therefore, there cannot be universally convergent (uncoupled) learning dynamics. Later, Shamma and Arslan, Ref. (17), showed that higher-order learning dynamics can converge to equilibrium in anti-coordination games. This paper provides a more general result saying that for any game (with a mixed-strategy Nash equilibrium), there exists a (possibly higher-order) payoff-based learning dynamic that converges locally to that equilibrium. Seeing this, researchers may start looking for a universally convergent higher-order learning dynamic that converges to equilibrium in every game. However, the paper proves the negative result that given any such higher-order dynamics, there always exists a certain anti-coordination game in which the dynamics do not converge to equilibrium.
Therefore, we can view this paper as a generalization of Hart and Mas-Colell's seminal negative result to higher-order learning dynamics. Note that Foster and Young designed uncoupled stochastic rules, known as regret testing, that can converge probabilistically to Nash equilibrium in every two-player strategic-form game. However, the convergence is in the relatively weak sense that players will be at equilibrium most of the time, though they may move away from it. Note also that complexity results related to Nash equilibrium computation are not relevant, since the paper focuses on asymptotic convergence for finite games. A mixed-strategy Nash equilibrium always exists in strategic-form games with finitely many players and actions. For these reasons, I believe that this result is worth being taught in (advanced) game theory courses. I acknowledge that I have read the rebuttal.

Weaknesses: - Figures might include captions with more detailed descriptions. Technical Quality: 4 excellent Clarity: 4 excellent

Questions for Authors: - What is the main obstacle to addressing instantaneous scalar payoffs rather than the payoff vector setup? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are highlighted explicitly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:
Question>> What is the main obstacle to address instantaneous scalar payoffs rather than payoff vector setup?
We believe that the present results can be used to analyze instantaneous scalar payoffs in the case of discrete-time learning with randomized action selection. The continuous-time ordinary differential equations (ODEs) presented in this paper can be seen as the ODEs that emerge from the ODE method of stochastic approximation (e.g., Benaim, “A dynamical system approach to stochastic approximations”, 1996) to analyze discrete-time stochastic iterations. Prior work (Fudenberg and Levine, “Consistency and cautious fictitious play”, 1995) illustrates how the scalar payoff case of fictitious play can be analyzed using such methods. This approach was also utilized to analyze higher-order learning under scalar payoffs for the specific case of “anticipatory” higher-order learning (Arslan and Shamma, “Distributed convergence to Nash equilibria with local utility measurements”, 2004). Likewise, we believe that the case of instantaneous scalar payoffs can be addressed using the setting in the present paper as the basis of the emergent ODEs of stochastic approximation.

Weakness>> Figures might include captions with more detailed descriptions.
The final version will revisit the figure captions to add more details.
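The ODE-method connection invoked in this rebuttal can be illustrated with a minimal Robbins–Monro-style sketch. This is a generic illustration of stochastic approximation (not the paper's learning dynamics): a discrete-time iteration driven only by noisy scalar observations tracks the mean ODE dx/dt = h(x) and converges to its stable rest point, which is why continuous-time ODE analysis informs discrete-time stochastic learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # Mean field of the iteration: the ODE dx/dt = h(x) has rest point x* = 2
    return -(x - 2.0)

x = 0.0
for k in range(10000):
    a_k = 1.0 / (k + 1)                     # step sizes: sum a_k = inf, sum a_k^2 < inf
    noisy = h(x) + rng.normal(0.0, 1.0)     # only a noisy scalar realization is observed
    x += a_k * noisy                        # stochastic approximation update
print(x)   # close to the ODE rest point 2.0
```

With this step-size choice the iterate is exactly a running average of noisy samples of the rest point, so the noise averages out while the drift of the mean ODE determines where the iteration settles.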
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective
Accept (poster)
Summary: The authors propose Scheduled Weight Decay (SWD), a method that mitigates the large gradient norm issue caused by constant weight decay factors. The authors demonstrate that SWD can improve the generalization performance of Adam and outperform other adaptive optimizers on the CIFAR-10/100 datasets.

Strengths: 1. The proposed method is simple and easy to implement. Moreover, it addresses the long-standing generalization gap between adaptive optimizers and SGD on certain tasks. 2. The authors provide a theoretical analysis of the problem of unstable stationary points. 3. The authors conduct extensive experiments to validate their claims and compare their method with other optimizers.

Weaknesses: 1. The gradient analyzed in Theorem 2 seems to include the gradient of the L2 regularization, which is not included in the empirical analysis (e.g., Fig. 2). 2. Table 1 in the appendix and Table 1 in the main text show very different results, especially for SGD. The reason for this difference should be made clearer. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed the limitations of the proposed method on datasets other than CIFAR-10/100. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer tfGK’s kind support and helpful comments. The reviewer clearly recognized the important value of identifying and mitigating the overlooked but serious pitfalls of weight decay. We gratefully hope the reviewer will maintain this opinion to avoid a possible loss for our community. We have also addressed the concerns as follows.

Q1: The gradient analyzed in Theorem 2 seems to include the gradient of L2 regularization, which is not included in the empirical analysis (e.g. Fig. 2).
A1: Thanks for the helpful comment. We will discuss this point in the revision. Figure 2 indeed plots the gradient norms without regularization included. However, we argue that the empirical conclusions are the same with or without regularization, because the numerical difference due to regularization is multiple orders of magnitude smaller than the gradient norms themselves in our experiments.

Q2: Table 1 in the appendix and Table 1 in the main text show very different results, especially for SGD. The reason for this difference should be made more clear.
A2: Again, thanks for the helpful comment. The difference between the two Tables 1 in the main text and the appendix lies in the hyperparameter choice of weight decay. We discuss why some previous papers’ results are closer to Table 1 in the appendix: some previous papers designed novel adaptive optimizers and claimed they generalize as well as SGD only because an improper weight decay strength was chosen for SGD. We have discussed this point in Lines 85-101 of the appendix. We will follow your suggestion to present the point more clearly.
Summary: This paper studies the overlooked pitfalls of weight decay, a regularization technique used in deep neural networks (DNNs). The authors discovered that weight decay can lead to large gradient norms, particularly at the final phase of training, often indicating poor convergence and generalization. To address this issue, the authors propose a novel method called Scheduled Weight Decay (SWD), which dynamically adjusts weight decay strength according to the gradient norm and penalizes large gradient norms during training. The paper concludes that the SWD approach outperforms the conventional constant weight decay strategy, especially for the Adaptive Moment Estimation (Adam) optimization algorithm. Strengths: 1. SWD dynamically adjusts weight decay, which can help to avoid large gradient norms that can lead to poor convergence and generalization. The paper demonstrates that SWD outperforms the traditional constant weight decay, especially in Adam optimization, potentially leading to better model performance. 2. The concept is simple to understand and can be easily implemented in most deep learning frameworks. Weaknesses: 1. The proposal is specific to weight decay regularization and might not generalize to other regularization techniques. 2. SWD's performance might depend heavily on how well the scheduling function is chosen, which can be a complex task. 3. The paper doesn't discuss how SWD might affect the training time, and adjusting weight decay dynamically could potentially increase computational complexity. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How does SWD affect the training time, given the additional complexity introduced by dynamically adjusting the weight decay? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer 3dfT’s kind support and helpful comments. We gratefully hope the reviewer will maintain these opinions, which may help our community understand and employ weight decay better via our work. We also respond to the concerns as follows.

Q1: The proposal is specific to weight decay regularization and might not generalize to other regularization techniques.
A1: Thanks for the comment. This is true, while the value of our method is still significant given the importance and popularity of weight decay.

Q2: SWD's performance might depend heavily on how well the scheduling function is chosen, which can be a complex task.
A2: We respectfully note that SWD itself is an adaptive weight decay scheduler which directly depends on gradient norms and requires no further manual design. We also agree that it may be possible to design other, more complex schedulers for weight decay in the future.

Q3: How does SWD affect the training time, given the additional complexity introduced by dynamically adjusting the weight decay?
A3: In Line 201, we mentioned that the cost difference of AdamS versus AdamW/Adam is nearly negligible (usually less than 5%) in practice. This is not surprising because computing mean(v) is very cheap compared with computing gradients per iteration. We will emphasize this point more clearly in the revision.

--- Rebuttal Comment 1.1: Comment: After reading the authors' response, I will keep the original score.
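The negligible-overhead point in A3 can be illustrated with a minimal sketch of a gradient-norm-aware weight decay inside an Adam-style step. This is our own simplified stand-in, not the paper's exact AdamS update: the schedule `lam * sqrt(v_hat.mean())` is an assumed illustrative choice that ties the decay strength to a gradient-norm statistic, and the only extra work per step is one mean over the second-moment buffer.

```python
import numpy as np

def adam_swd_steps(theta, grad_fn, steps=200, lr=0.05,
                   beta1=0.9, beta2=0.999, eps=1e-8, lam=0.1):
    """Adam with a decoupled weight decay factor rescaled each step by a
    statistic of the second-moment buffer v (illustrative stand-in for a
    gradient-norm-aware schedule; a constant-decay baseline uses lam as is)."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)                 # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        lam_t = lam * np.sqrt(v_hat.mean())          # the only extra cost: one mean()
        theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + lam_t * theta)
    return theta

theta0 = np.array([3.0, -2.0, 1.0])
theta_final = adam_swd_steps(theta0, grad_fn=lambda th: th)  # grad of 0.5*||th||^2
print(np.linalg.norm(theta_final) < np.linalg.norm(theta0))  # descent plus decay shrink theta
```

The `v_hat.mean()` reduction is O(number of parameters) with a tiny constant, which is consistent with the claimed sub-5% cost difference relative to computing gradients.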
Summary: This paper studies the role of weight decay and its connection with large gradient norms in deep learning settings. In particular, the paper highlights differences between variants of weight decay and also the effect of weight decay on the gradient norm in the final phase. Based on the observation that weight decay yields large gradient norms, the authors propose a scheduler for weight decay, called Scheduled Weight Decay, which dynamically adjusts the weight decay strength.

Strengths: The paper is reasonably well-written, and the motivation for studying the effect of weight decay is both important and clear. The approach is easy to implement, and the results on some smaller-scale benchmarks seem encouraging.

Weaknesses: (1) The theoretical results are weak and not very interesting. Theorem 2 follows from standard convergence rates for SGD. I could not find any novelty in this result. The authors should highlight the novelty of the result and proof technique in the main paper. (2) While Theorem 1 seems interesting, it seems somewhat weak. The lower bound on the difference in learning rate is somewhat artificial and unreasonable. For instance, consider decreasing learning rates of SGD typically used in machine learning settings, i.e., eta_t >= eta_{t+1}. Then Theorem 1 assumes delta <= eta_t - eta_{t+1}. This implies eta_0 >= delta * t for any t > 0. This makes sense if delta is going to 0 as t -> infinity. Maybe I am missing something, but I would really appreciate it if the authors can comment on this. (3) Theorems 1 & 2 seem to analyze gradients of two different functions, which doesn't seem like a proper comparison. More remarks regarding this would be valuable. Furthermore, I would like clarification on whether the y-axis in the plots of Figure 2 shows gradient norms with regularization included. (4) Definition 1 is very unclear. The authors should provide more remarks around Definition 1 to help readers understand the definition better.
(5) The dependence of C_2 on sup norm of theta is somewhat weird in Theorem 2. Are you assuming ||theta|| is bounded? (otherwise rewrite the theorem statement without this dependence). (6) I think the theoretical basis for Gradient-Norm-Aware Scheduled Weight Decay is weak as presented in the paper. While I understand the intuition, it is not clear why this particular proposal is meaningful. Since the theoretical basis for this is lacking, I would expect much more comprehensive experiments to support the method. Unfortunately, the empirical analysis in the paper is somewhat limited. Line 20: eta -> eta_t is the learning rate & Line 125: E[||∇L(θ, X) − ∇L(θ)||^2] ≤ \sigma^2? Post-rebuttal ========== I think the theoretical analysis is still not convincing but given that these observations are interesting, I am slightly increasing the score. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the concerns raised in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer xE1B’s hard work and helpful comments. The comments show that the reviewer's concerns are only about the theoretical evidence in our work. Our theoretical analysis is proposed not as a main contribution but to explain our interesting findings. Our main contributions are revealing the overlooked pitfalls of weight decay and how to mitigate them. If a simple theoretical mechanism can describe the pitfalls well, it is an advantage rather than a weakness. Given the importance of weight decay, our contributions are significant. If the reviewer indeed accepts the reported overlooked pitfalls and the effectiveness of SWD, we strongly encourage the reviewer to re-evaluate the importance of our contributions. We also try to address the mentioned weaknesses about the theoretical evidence as follows. Q1: Theorem 2 follows from standard convergence rates for SGD. I could not find any novelty in this result. The authors should highlight the novelty of the result and proof technique in the main paper. A1: We frankly admit that Theorem 2 can be easily obtained from standard convergence rates with the modification of weight decay. However, we believe that the simplicity of the theoretical analysis is not a weakness. Again, our theoretical analysis is proposed not as a main contribution but to explain our interesting findings, namely the large-gradient-norm pitfall and SWD. We argue, following Ockham's Razor, that if a simple theory can explain an interesting novel finding, it is unnecessary to pursue a more complex theory with more tricks. Q2: While Theorem 1 seems interesting, it seems somewhat weak. The lower bound on the difference in learning rate is somewhat artificial and unreasonable. For instance, consider decreasing learning rates of SGD typically used in machine learning settings i.e., eta_t >= eta_{t+1}. Then Theorem 1 assumes delta <= eta_t - eta_{t+1}. This implies eta_0 >= delta . t for any t > 0.
This makes sense if delta is going to 0 as t -> infinity. Maybe I am missing something but would really appreciate it if the authors can comment about this. A2: We would sincerely apologize if we misunderstood your question. In principle, if t approaches infinity and $\eta_{t}$ approaches zero, then $\delta$ and the lower bound will indeed approach zero. However, in practice, with finite iterations and a learning rate schedule, people can only have a finite learning rate. More importantly, Theorem 1 suggests that adaptive optimizers can lead to a more significant large-gradient-norm pitfall, as $\delta$ as well as its preconditioned learning rate $\frac{\eta_{t}}{\sqrt{v_{t}}}$ can be unstable and large during training. Q3: Theorem 1 & 2 seem to analyze gradients of two different functions, which doesn't seem like a proper comparison. More remarks regarding this will be valuable. Furthermore, I would like to get clarification if the y-axis in plots of Figure 2 are gradient norms with regularization included. A3: Thanks for the constructive comment. Theorems 1 and 2 both analyze the regularized loss function $f(\theta)$, so the messages they carry are consistent. We note that the expression of $f(\theta)$ can be slightly different due to the types of weight decay. We use the form of vanilla weight decay in Theorem 1 and the form of standard $L_{2}$ regularization/decoupled weight decay in Theorem 2, where $L_{2}$ regularization and decoupled weight decay are identical and common for SGD. We will follow your suggestion to make this clearer in the revision. Figure 2 plots the gradient norms without regularization included. However, the empirical conclusions are the same with or without regularization, because the numerical difference due to regularization is multiple orders of magnitude smaller than the gradient norms in our experiments. Q4: Definition 1 is very unclear.
The authors should provide more remarks around definition 1 to help readers understand the definition better. A4: Thanks for the constructive comment. We will provide more remarks and propose a more formal Definition 1. Q5: I think the theoretical basis for Gradient-Norm-Aware Scheduled Weight Decay is weak as presented in the paper. While I understand the intuition, it is not clear why this particular proposal is meaningful. Since the theoretical basis for this is lacking, I would expect much more comprehensive experiments to support the method. Unfortunately, the empirical analysis in the paper is somewhat limited. A5: We respectfully argue that the primary purpose of our theoretical analysis is to demonstrate the existence of the overlooked pitfalls of weight decay, because identifying the overlooked large-gradient-norm pitfall of weight decay is our first and most valuable contribution. Given these overlooked pitfalls, the proposed Scheduled Weight Decay (SWD) method is a naturally inspired algorithm for mitigating the large-gradient-norm pitfall. The effectiveness of SWD in mitigating the overlooked pitfalls is very significant (e.g., Figures 3 and 4). We admit that we have no rigorous generalization-bound theory for scheduling weight decay at present. Formal theories analyzing weight decay and learning rate schedulers are interesting, and we leave developing such theories of weight decay schedulers as future work. Q6: Typos. A6: Thanks a lot for pointing out the typos. We will correct them in the revision. --- Rebuttal Comment 1.1: Title: More Discussion? Comment: We hope our responses could address your concerns. If there are any further questions, we are very glad to continue the discussion!
Summary: I would divide this paper's contributions into two parts. The first part is an algorithm (a variant of Adam) which the authors argue generalizes better than Adam/AdamW and is easier to tune. The second part is the justification for the effectiveness of that algorithm. **The algorithm itself** The proposed algorithm, AdamS, is similar to AdamW, except that the weight decay strength is divided by the mean of the current squared gradient EMA. This is effectively a schedule for weight decay. It has the effect of penalizing the overall weights less strongly during phases of training when the overall gradient is large, and penalizing the overall weights more strongly during phases of training when the overall gradient is small. For example, Figure 3 shows gradient norm and weight decay strength during the training of ResNet-34 on CIFAR-10; after the first learning rate drop, squared gradient norm grows, which causes the weight decay strength to shrink; after the second learning rate drop, squared learning rate plummets, which causes weight decay to rise. **Experimental evaluation** Table 1 and section 5 experimentally evaluate AdamS and argue that the algorithm is better generalizing / easier to tune than AdamW. **Justification**: The authors justify their algorithm along the following lines (which I disagree with; see more below): 1. When the learning rate is potentially changing, gradient descent with weight decay has trouble converging to stationary points - in particular, the authors argue (Theorem 1) that in this setting, the gradient norm will be lower bounded near stationary points. Therefore, the authors argue that weight decay causes the gradient to be large, especially at the end of training. 2. The authors argue that large gradients are bad for generalization. 3. 
As a consequence of #1 and #2, AdamW-style weight decay is bad for generalization (since it causes large gradients near stationary points, and large gradients are bad for generalization) The authors acknowledge that weight decay on networks with normalization layers has been shown in the literature to control the effective learning rate, but insist that their paper's mechanism is unrelated to this, and propose an entirely different mechanism. --- **post-rebuttal update** After discussion with the authors, I'm raising my score from 3 ("reject") to 6 ("weak accept"). Firstly, it's clear that my original review was wrong; I had argued that the proposed algorithm's benefits are probably due to the interaction between weight decay and normalization layers, but the authors have correctly pointed out that the same phenomena occur on networks without normalization layers, and I have verified this myself. The reason for 6 as opposed to 7 is that I believe that the proposed theoretical explanation for why weight decay would cause large gradients is not the real explanation (reviewer xE1B also didn't like this theoretical portion of the paper). That said, it's entirely plausible that weight decay causing large gradients is one of the *many* aspects of deep learning that is just theoretically unexplainable at the present time. So, my issue with the submission is less that the paper fails to include a correct theoretical justification, and more that it includes a faulty theoretical justification; I think it is better to have no justification at all than to have a faulty justification. That being said, on the positive side, it seems there is a good chance that the proposed algorithm is a worthwhile addition to the deep learning toolbox (though I am not an expert at evaluating this type of contribution). 
Strengths: I'm not an expert in experimentally evaluating the performance of neural network training algorithms, but from the experiments in section 5 it does seem plausible to me that AdamS is indeed better / easier to tune than AdamW. Weaknesses: While I can accept that the algorithm may be a good idea, I strongly disagree with the proposed justification given in the paper. I believe the effectiveness of the algorithm is probably unrelated to the justification given in the paper, and is instead closely linked to the implicit effects of weight decay on effective learning rate when the network has normalization layers. First, I would note that all of the experiments in the paper are on networks with normalization layers. The authors argue in Figure 2 that VGG-16 is "not scale invariant" but the VGG-16 architecture does have many BatchNorm layers, even though it is not fully scale-invariant, so I believe that the literature on scale invariance is still quite related to what is going on with the VGG-16. For networks with normalization layers, there is a clear mechanism by which weight decay causes large gradients. If $L$ is the loss for a scale-invariant network, then $\nabla L(c \theta) = \frac{1}{c} \nabla L(\theta)$, i.e. scaling down the weights $(c < 1)$ will automatically scale up the gradients (see Lemma 1 here: http://www.offconvex.org/2020/04/24/ExpLR1/). The large gradients have been reported to cause the 'effective learning rate' of training to be higher, which is thought to be good for generalization. My intuition for the mechanism in this paper is: if the gradient norm is too large, our effective learning rate will be too large for the algorithm to converge, so we need to decrease weight decay so the gradient norms come back down; if the gradient norm is too small, our effective learning rate will be too small (which makes convergence slow and is bad for generalization) so we need to increase weight decay so that the gradient norm moves back up. 
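The scale-invariance lemma quoted above, $\nabla L(c \theta) = \frac{1}{c} \nabla L(\theta)$, is easy to check numerically. The toy function below is an illustrative stand-in for a scale-invariant network (it depends only on the direction of $\theta$), not anything from the paper:

```python
import numpy as np

def f(theta):
    # Toy scale-invariant "loss": depends only on the direction of theta,
    # mimicking a network whose output is invariant to weight rescaling.
    return theta[0] / np.linalg.norm(theta)

def num_grad(fn, theta, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (fn(theta + e) - fn(theta - e)) / (2 * h)
    return g

theta = np.array([0.3, -1.2, 0.7])
c = 0.5
g = num_grad(f, theta)
g_scaled = num_grad(f, c * theta)
# Scaling the weights down by c scales the gradient up by 1/c.
assert np.allclose(g_scaled, g / c, atol=1e-5)
```

This is the mechanism the review appeals to: under scale invariance, smaller weights (e.g. from stronger weight decay) mechanically mean larger gradients.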
By contrast, the proposed justification doesn't really make sense to me. Figure 2 shows that higher weight decay causes growth in the gradient norm _in the middle of training_, not at convergence. This is much more easily explained by scale-invariance (for the VGG-16, the scale invariance of certain layers) than by Theorems 1 and 2. Figure 5 shows that weight decay increases the eigenvalue spectrum at convergence. I believe this is related to cross-entropy loss, and wouldn't happen with e.g. squared loss. For cross-entropy loss, the Hessian is small when the margins are large. Adding weight decay prevents the margins from becoming large and therefore the Hessian from shrinking. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How do the authors respond to my critique above? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: discussed above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer e7t4's hard work and recognition of the value of our contribution. We notice that Reviewer e7t4 currently tends to reject our work only because the reviewer believes in an alternative scale-invariance mechanism for our experiments. However, we respectfully note that this justification rests on a key factual mistake, which misled Reviewer e7t4's rating of our work. Standard VGG-16 (in the original paper and our paper) actually has no BatchNorm layers, while the so-called VGG-16-bn (with BatchNorm) is sometimes used in other papers. Thus, as we mentioned in the paper, scale invariance by itself cannot explain the observed pitfalls of weight decay. So the belief that SWD is unrelated to large gradient norms, and the associated ``evidence'', are both unconvincing. We address your justifications as follows. Q1: I believe the effectiveness of the algorithm is probably unrelated to the justification given in the paper, and is instead closely linked to the implicit effects of weight decay on effective learning rate when the network has normalization layers. A1: The belief that SWD is not related to mitigating large gradient norms has no real evidence behind it. In contrast, we showed that SWD mitigates the overlooked pitfalls of weight decay with both empirical and theoretical evidence. We did not observe that SWD or the pitfalls behave qualitatively differently on ResNet (with BatchNorm) versus VGG (without BatchNorm). Q2: I would note that all of the experiments in the paper are on networks with normalization layers. The authors argue in Figure 2 that VGG-16 is "not scale invariant" but the VGG-16 architecture does have many BatchNorm layers, even though it is not fully scale-invariant, so I believe that the literature on scale invariance is still quite related to what is going on with the VGG-16. A2: We respectfully point out that the first judgement is wrong.
Standard VGG, which we used, is indeed not scale invariant and has no BatchNorm layers. Q3: My intuition for the mechanism in this paper is: if the gradient norm is too large, our effective learning rate will be too large for the algorithm to converge, so we need to decrease weight decay so the gradient norms come back down; if the gradient norm is too small, our effective learning rate will be too small (which makes convergence slow and is bad for generalization) so we need to increase weight decay so that the gradient norm moves back up. By contrast, the proposed justification doesn't really make sense to me. A3: We totally agree with the intuition (except that it connects only to scale invariance). Moreover, this intuition does NOT contradict the large-gradient-norm pitfall. The large-gradient-norm pitfall is exactly a signal of poor convergence and generalization in the final training phase. The proposed SWD indeed automatically adjusts the strength of weight decay and thereby successfully mitigates the large gradient norms as well as the poor convergence/generalization. Q4: Figure 2 shows that higher weight decay causes growth in the gradient norm in the middle of training, not at convergence. This is much more easily explained by scale-invariance (for the VGG-16, the scale invariance of certain layers) than by Theorems 1 and 2. A4: Figure 2 shows that weight decay causes growth in the gradient norm in both the middle and final phases of training, while the gradient norm in the final phase is relatively lower. This can be explained by Theorems 1 and 2, but cannot be explained by scale invariance alone (the VGG-16 has no scale-invariant layer). Moreover, the gradient norm growth in the final phase is more interesting, because it closely relates to poor convergence and generalization. Q5: Figure 5 shows that weight decay increases the eigenvalue spectrum at convergence. I believe this is related to cross-entropy loss, and wouldn't happen with e.g.
squared loss. For cross-entropy loss, the Hessian is small when the margins are large. Adding weight decay prevents the margins from becoming large and therefore the Hessian from shrinking. A5: Thanks for the comments. We reported the interesting observation that the Hessian eigenspectrum and minima sharpness are significantly affected by SWD. Your comment may be right, but it does not indicate any weakness of our work. Finally, we sincerely thank Reviewer e7t4 again for the hard work and comments. We strongly encourage the reviewer to re-evaluate our work without the distraction of the misunderstanding and factual mistake. We appreciate it very much in advance. --- Rebuttal Comment 1.1: Title: question Comment: Hmm, ok, I'll have to think about this. A question: on networks with no batch normalization, does SGD with weight decay also cause high gradient norm mid-training? (That is, does mid-training gradient norm get higher as weight decay strength gets higher?) Or is this just something that Adam does? --- Reply to Comment 1.1.1: Title: Grateful Thanks and Response to Additional Questions Comment: We gratefully thank the reviewer for the prompt reply and for carefully re-evaluating our work. Your diligence is highly appreciated and is exactly what the whole community hopes for. We respond to your additional questions as follows. Q6: On networks with no batch normalization, does SGD with weight decay also cause high gradient norm mid-training? (That is, does mid-training gradient norm get higher as weight decay strength gets higher?) Or is this just something that Adam does? A6: Yes, SGD with weight decay also significantly increases gradient norms during training. The observation is quite general; it does not happen only with Adam. Q7: Additionally, could you point me towards code for a VGG-16 sized for CIFAR-10 with no batch norm? A7: Yes, of course.
The following GitHub repo (https://github.com/kuangliu/pytorch-cifar/blob/master/models/vgg.py) contains a very popular implementation of VGG for CIFAR-10/100. Moreover, torchvision.models.vgg16 is also a standard VGG-16 without BatchNorm (usually for ImageNet). The VGG-16 with BatchNorm is referred to as torchvision.models.vgg16_bn.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective
Accept (poster)
Summary: This paper proposes two methods for multi-view clustering, which aims at grouping data from multiple sources or perspectives. The first part, SCMVC, uses a consistent variational lower bound to learn consistent information among views. The second part, SUMVC, extends the information bottleneck principle to reduce redundancy and achieve sufficient representation among views. The proposed SUMVC consists of two terms: a consistent variational lower bound and a sufficient representation lower bound to enhance consistency and minimize redundancy among views. The authors leverage information bottleneck theory and variational analysis to develop the model. The paper also provides a theoretical analysis of the generalization error of the learned representations based on the Bayes error rate. The paper evaluates the proposed methods on four real-world multi-view datasets. Strengths: 1. The paper addresses an important and challenging problem of MVC with an information-theoretic perspective. This paper proposes novel lower bounds to address the issue of view redundancy and consistency. 2. The authors utilize the Bayes Error Rate to provide a theoretical explanation of the effectiveness of the proposed method. 3. The paper presents extensive experiments on four real-world multi-view datasets and demonstrates the superior performance of the proposed methods over existing methods. Weaknesses: 1. The proposed objective function in Eq. (4) shares similarities with that of VAE [1], and it would be beneficial if the author could provide further explanations on this matter. How does this loss function differ from that of VAE? Is it merely an extension of the single-view approach to multi-view data? Additionally, what is the reason for SCMVC achieving significantly better performance than VAE-based methods in Table 2? 2. In SCMVC, the fusion representation $\vec{Z}$ is directly optimized, while it is not in SUMVC. Could this approach negatively impact the performance of SUMVC? 3.
The methods discussed in this article may not be effectively applied to datasets with high heterogeneity. 4. The introduction section contains some complex sentences. Simplifying the language and structure could improve clarity. 5. It would be beneficial for the authors to provide a more comprehensive discussion of some missed information-based multi-view methods [2-5] in the related work section. [1] Auto-Encoding Variational Bayes, NIPS’13 [2] COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction, CVPR'21 [3] Rethinking Minimal Sufficient Representation in Contrastive Learning, CVPR’ 22 [4] Dual Contrastive Prediction for Incomplete Multi-View Representation Learning, TPAMI'23 [5] Multi-view information-bottleneck representation learning, AAAI’21 Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors mention that the methods discussed in this article cannot be effectively applied to datasets with high heterogeneity. Could authors provide a detailed analysis? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see weakness. Also, the complexity analysis is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your invaluable comments and suggestions. We have addressed the points you raised as follows. 1. The proposed objective function in Eq. (4) shares similarities with that of VAE [1], and it would be beneficial if the author could provide further explanations on this matter. How does this loss function differ from that of VAE? Is it merely an extension of the single-view approach to multi-view data? Additionally, what is the reason for SCMVC achieving significantly better performance than VAE-based methods in Table 2? Thank you for your thoughtful comments. You are correct in noticing the similarity between the proposed objective function in our paper and that of the VAE. The objective function in our method is indeed a variant of the VAE loss function. However, it is specifically designed to handle multi-view data, while the VAE loss function is typically used for single-view data. This introduces significant differences in terms of functionality and application between the two. Our method is not merely an extension of the single-view approach to multi-view data. It takes into account the unique characteristics of multi-view data, such as inter-view correlations and view-specific features, which are not considered in standard VAE-based methods. This ability to exploit multi-view information is a key factor distinguishing our loss function from that of VAE. The superior performance of SUMVC over the VAE-based method (i.e., \beta-VAE) presented in Table 2 can be attributed to several factors. Firstly, the SUMVC model is designed to capture both shared and view-specific representations, while the VAE-based model generates only a single shared representation. This additional flexibility in SUMVC allows for better handling of multi-view data and contributes to its enhanced performance. 2. In SCMVC, the fusion representation $\overrightarrow{Z}$ is directly optimized, while it is not in SUMVC.
Could this approach negatively impact the performance of SUMVC? Thank you for raising this valuable point. In the proposed SUMVC method, the fusion representation is indeed not directly optimized as it is in SCMVC. However, this approach is by design and serves a specific purpose in the context of our research. In SUMVC, the goal is to utilize the complementary information from different views without explicitly enforcing a shared representation. This allows SUMVC to maintain the distinctiveness of each view, which can be beneficial in cases where there is considerable heterogeneity across views. On the other hand, SCMVC directly optimizes the fusion representation to encourage greater interaction and integration between views. This approach is particularly suitable when there is a high level of consistency or overlap among the views. Thus, while the direct optimization of $\overrightarrow{Z}$ in SUMVC could theoretically enhance performance in some cases, it could also compromise the ability of the model to handle datasets where preserving the distinctiveness of each view is crucial. Therefore, the decision to not directly optimize $\overrightarrow{Z}$ in SUMVC was an intentional design choice made with these considerations in mind. We appreciate your attention to these details and your insightful question. We hope this explanation provides a clearer understanding of the design and intended applications of SUMVC and SCMVC. 3. The methods discussed in this article may not be effectively applied to datasets with high heterogeneity. We discussed in the Appendix that the improvement in performance is not very significant when heterogeneity is high compared to single-view clustering. For experimental results, please refer to the response to Reviewer 79CP. We believe that incorporating these additions will offer a more thorough understanding of the applicability and limitations of our methods. 
We value your critical feedback and are confident that it will contribute to strengthening our paper. 4. The introduction section contains some complex sentences. Simplifying the language and structure could improve clarity. Thank you for your valuable feedback. We will revise the introduction section and work on simplifying the language and sentence structure. We will strive to break down complex sentences into more manageable parts and to convey our ideas in a straightforward manner without compromising the depth of our research. 5. It would be beneficial for the authors to provide a more comprehensive discussion of some missed information-based multi-view methods [2-5] in the related work section. Thank you for your thoughtful suggestion. We will review these papers and add an enhanced discussion of these methods in the related work section of our manuscript. We will discuss their methodologies, findings, and how they relate to and contrast with our own work in the final version. The discussion of the methods mentioned by the reviewer is as follows: “Multi-view information-bottleneck representation learning” aims to develop a model that effectively explores the common latent structure and view-specific intrinsic information in multi-view data while discarding irrelevant information to enhance generalization capability. “Completer: Incomplete multi-view clustering via contrastive prediction” addresses two challenging problems in incomplete multi-view clustering analysis: learning an informative and consistent representation among different views without labels, and recovering missing views from the data. “Dual contrastive prediction for incomplete multi-view representation learning” provides a new perspective on the relationship between cross-view consistency learning and data recovery and propose a method that jointly addresses these challenges in multi-view representation learning.
--- Rebuttal Comment 1.1: Comment: After thoroughly reviewing the feedback provided by both the other reviewer and the author's rebuttal, I am pleased to state that my initial concerns have been adequately addressed. I hope that these discussions can be of value to the authors in their efforts to enhance their work and I decided to raise my rating.
Summary: The paper proposed Sufficient Multi-View Clustering , SUMVC, which is composed of two main components. The first component is a simple and reliable multi-view clustering method called SCMVC (simple consistent multi-view clustering), which utilizes variational analysis to generate consistent information. The second component proposes a lower bound on sufficient representation, aiming to enhance consistent information and reduce unnecessary information among views. Strengths: - Analyze the effectiveness of multi-view clustering from the information-theoretic perspective is interesting and valuable. - The proposed method outperforms the state of the arts. The paper has good reproducibility with the provided codes. - The paper is very well written. The theoretical and empirical analyzes are convincing. Weaknesses: - There are few comparative methods, and more comparative methods need to be selected reasonably to illustrate the effectiveness of the proposed methods. - The theoretical analysis (based on the Bayes Error Rate) should be elaborated more. The key findings or insights from this analysis should also be emphasized. - Did the experimental results demonstrate any limitations or potential challenges of the proposed method? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It can be found in Appendix of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. There are few comparative methods, and more comparative methods need to be selected reasonably to illustrate the effectiveness of the proposed methods. We appreciate your suggestion. We have conducted additional comparisons with several other relevant methods, i.e., FMR (Flexible multi-view representation learning for subspace clustering, IJCAI 2019), LMVSC (Large-scale multi-view subspace clustering in linear time, AAAI 2020) and CSMSC (Consistent and specific multi-view subspace clustering, AAAI 2018). These methods are chosen based on their relevance, popularity, and the availability of implementation details, which ensure a fair comparison. We demonstrate the results using the Multi-COIL-10 dataset as an example. Again, our method outperforms all these methods.

| Method | FMR | LMVSC | CSMSC | SCMVC | SUMVC |
| --- | --- | --- | --- | --- | --- |
| ACC | 78.1 | 63.8 | 97.6 | 98.1 | 100.0 |
| NMI | 80.0 | 75.8 | 96.2 | 96.7 | 100.0 |
| ARI | 70.6 | 55.1 | 94.9 | 95.8 | 100.0 |

2. The theoretical analysis (based on the Bayes Error Rate) should be elaborated more. The key findings or insights from this analysis should also be emphasized. Thank you for your constructive feedback. The Bayes Error Rate, defined as the probability of misclassifying a data point when the true underlying distribution of the data is known, serves as a pivotal indicator of the performance of learning algorithms. It offers valuable insights into the effectiveness of the feature extraction conducted by our model. In particular, the Bayes Error Rate is an important index in understanding the overall performance of our model and reflects the impact of representations on the performance of downstream tasks. In response to this comment, we will expand our discussion of the Bayes Error Rate and its implications in the final version.
We delve deeper into its theoretical underpinnings and provide a clearer connection between a lower Bayes Error Rate and the efficacy of the features extracted by our model for downstream tasks. 3. Did the experimental results demonstrate any limitations or potential challenges of the proposed method? Thank you for your insightful question. We discussed limitations in the Appendix. One limitation we found is that the proposed model exhibits poor performance when applied to datasets that have significant heterogeneity across different perspectives. This could potentially impact the robustness and generalization of the model.
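The Bayes Error Rate invoked in the rebuttal above has a simple closed form in toy settings. The sketch below (a hypothetical two-class, equal-prior, 1-D Gaussian example, not taken from the paper) shows the sense in which better-separated representations lower the irreducible error:

```python
import math

def bayes_error_two_gaussians(mu0, mu1, sigma):
    """Bayes Error Rate for two equal-prior 1-D Gaussians N(mu0, sigma^2), N(mu1, sigma^2).

    The Bayes-optimal classifier thresholds at the midpoint of the means;
    by symmetry the error equals the Gaussian tail mass of one class past
    that midpoint.
    """
    z = abs(mu1 - mu0) / (2.0 * sigma)  # distance from a class mean to the threshold
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))  # tail probability P(Z > z)

close = bayes_error_two_gaussians(0.0, 2.0, 1.0)  # overlapping clusters
far = bayes_error_two_gaussians(0.0, 4.0, 1.0)    # better-separated clusters
# A representation that separates clusters more widely has a lower Bayes Error
# Rate, which is how the rebuttal ties BER to downstream clustering performance.
```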
Summary: This work introduces a new approach called sufficient multi-view clustering (SUMVC) to improve clustering performance using multiple data sources. Existing methods often focus on acquiring consistent information while neglecting the issue of redundancy across multiple views. By contrast, the proposed SUMVC provides a promising solution to the problem of multi-view clustering and offers a new perspective for analyzing multi-view data. The effectiveness of the model is verified through theoretical analysis and experiments on multiple multi-view datasets, showing superior performance compared to other methods. Strengths: 1. By examining the multi-view clustering framework from an information-theoretic standpoint, SUMVC offers a promising solution to the problem of multi-view clustering and provides a new perspective for analyzing multi-view data. 2. The effectiveness of the proposed model is demonstrated through theoretical analysis based on the Bayes Error Rate and experiments on multiple multi-view datasets, highlighting its superior performance compared to other methods. Weaknesses: 1. This paper lacks an in-depth discussion or evaluation of the proposed approach's limitations. 2. The authors could provide more insights into the computational complexity of the proposed method and its scalability to large-scale multi-view datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper mentions that the superiority of SUMVC is demonstrated through experiments on multiple multi-view datasets. However, there is no detailed discussion or analysis provided on the characteristics of these datasets or how they were selected. What are the criteria for dataset selection, and how representative are these datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact of this work exists. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and feedback. Below we provide our responses to the key questions raised by the reviewer. 1. The paper mentions that the superiority of SUMVC is demonstrated through experiments on multiple multi-view datasets. However, there is no detailed discussion or analysis provided on the characteristics of these datasets or how they were selected. What are the criteria for dataset selection, and how representative are these datasets? Thank you for your insightful comments. We appreciate your interest in the dataset selection process for our experiments. The datasets used in our experiments, including Multi-COIL-10 (K = 10), Multi-COIL-20 (K = 20), Multi-MNIST, and Multi-Fashion, were chosen based on several criteria. These criteria include the presence of multi-view data, variety in the type of data (ranging from object images in different poses to handwritten digits by different individuals), and their public availability and widespread use in the research community. More specifically: • In Multi-COIL-10 and Multi-COIL-20, different views of an object correspond to various poses, but retain the same label. This allows for an assessment of how well the SUMVC can handle variations in perspective. • In Multi-MNIST, different views of a digit represent the same digit written by different individuals, testing the SUMVC's ability to recognize the same object despite stylistic differences. • In Multi-Fashion, different views of a product category signify different fashionable designs for that category, challenging the SUMVC to identify similar categories despite variations in design. These datasets have been extensively used in the field, which allows for a reliable comparison of our method with existing ones. Moreover, the diversity of these datasets ensures a robust evaluation of our method across different types of multi-view data. 
These multi-view datasets are commonly used for evaluating MVC methods and widely applied in MVC research papers, such as “MCoCo: Multi-level Consistency Collaborative Multi-view Clustering”, “Contrastive multi-view hyperbolic hierarchical clustering” and “Deep safe incomplete multi-view clustering: Theorem and algorithm”. 2. This paper lacks an in-depth discussion or evaluation of the proposed approach's limitations. Thank you for your constructive feedback. We will add a new paragraph in the Conclusion section to address this issue. In this section, we critically analyze the limitations of our approach, discuss possible scenarios where our method may not perform optimally, and outline practical considerations for researchers and practitioners intending to adopt our methodology. Please see the details below: The heterogeneity can make it more difficult for the VAE to learn a meaningful latent representation of the data. When the views are highly dissimilar, such as the views of BDGP, it may be challenging for the VAE to find a shared low-dimensional representation that captures the important features of both views. This can lead to suboptimal performance and poor reconstruction quality. 3. The authors could provide more insights into the computational complexity of the proposed method and its scalability to large-scale multi-view datasets. Thank you for your insightful suggestion. We will include a new section in our manuscript entitled "Computational Complexity Analysis." In this section, we discuss the algorithmic complexity of our method, outline the key factors that influence its computational cost, and analyze its scalability with respect to the size of the dataset and the number of views. 
Please see the details below: Assume $T_1$ represents the number of iterations for training SCMVC, $T_2$ represents the number of epochs for training SUMVC on top of SCMVC, $l$ represents the dimensionality of the embedding for each view, $n$ is the number of instances, and $V$ is the number of views. Then the whole training process requires $O(VnT_1)$ to train the variational autoencoders via $L_{con}$ and $O(V(V-1)nlT_2)$ to calculate $L_{suf}$. Overall, the time complexity of the model is $O(Vn((V-1)lT_2 + T_1))$. Similar to commonly used deep MVC methods, the computational complexity of our approach is linear in the data size, making it easier to apply to large-scale MVC data clustering. --- Rebuttal Comment 1.1: Title: I have read the other reviewer's comments and the author's rebuttal. I will keep my original score. Comment: I have read the other reviewer's comments and the author's rebuttal. I will keep my original score.
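The complexity claim in the rebuttal above is easy to sanity-check numerically. The sketch below (illustrative parameter values, not from the paper) evaluates the stated operation count $O(Vn((V-1)lT_2 + T_1))$ and confirms the linear-in-$n$ scalability argument:

```python
def sumvc_training_cost(V, n, l, T1, T2):
    """Operation count from the rebuttal's analysis: O(V*n*T1) for training
    the per-view variational autoencoders via L_con, plus O(V*(V-1)*n*l*T2)
    for evaluating the pairwise sufficiency term L_suf across view pairs."""
    return V * n * T1 + V * (V - 1) * n * l * T2

# Doubling the data size n doubles the cost -- the linear-in-n claim.
c1 = sumvc_training_cost(V=3, n=1000, l=64, T1=100, T2=50)
c2 = sumvc_training_cost(V=3, n=2000, l=64, T1=100, T2=50)
```

Note that the cost is quadratic in the number of views $V$ (from the pairwise $L_{suf}$ term), so "linear" here refers only to the number of instances.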
Summary: This paper considers the problem of multi-view clustering from an information theoretic perspective. It focuses on representation learning, and optimizes said representation to improve down-stream clustering performance (with k-means). It introduces an Information Bottleneck based loss function, which considers consistency between views, redundancy and sufficiency of representations in addition to the traditional likelihood based reconstruction loss. It introduces 2 methods SCMVC and SUMVC based on some or all of these additional loss terms. The paper then analyzes the model effectiveness by establishing a connection to the Bayes Error Rate, and also shows good experimental results across some multi-view datasets. Strengths: - Introduces a novel approach for representation learning for Multi-View Clustering (MVC) based on information theoretic criteria. - Proposes two separate methods, both of which show good experimental performance. - Shows mathematical rigor both in breaking down the loss function and in the theoretical analysis that follows. - Compares against multiple state-of-the-art MVC methods and shows superior performance on the chosen datasets. - Conducts a wide range of experiments, including ablation studies and parameter sensitivity analysis. Weaknesses: - The presentation of the mathematical parts of the preliminaries, discussion and analysis is quite opaque. Variables and terms are often explained only after they have already been used in equations. The equations tend to be quite cluttered and difficult to parse. - The datasets chosen are less than ideal, and are mostly derivative from single-view datasets. There are other multi-view datasets such as NUS-WIDE and 3-Source News which are potential candidates for publicly available MV datasets. - The assumptions (and thus applicability) of the methods are restrictive. I.e. assuming mutual redundancy across all views. 
As they mention in the supplementary, their methods perform poorly on heterogeneous MV data, limiting their applicability. - The paper was not self-contained; the supplementary material was essentially required to understand a lot of the details. For example, the limitations of the methods were only mentioned in the supplementary material. The supplementary material must not be used as additional pages, and the paper itself should be able to stand alone. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall questions: - This paper is more about representation learning for clustering than clustering itself, right? Similar to spectral clustering. Not that that's a bad thing, it's just a little misleading. You spend a lot of the paper talking about MVC but the actual clustering is just k-means on top of a learned representation. It might help to be clear about this up front. - As I mentioned in the second point of the Limitations, it seems like this method is also weak to only partial information existing in views. How would you remedy this? Would ensemble variational methods work here as well? - For the MNIST experiments, I am confused about your choice to use pairs of the same digit (but written by different people) as two views. The data distributions of both views are very similar, then. Also, if you pair up digits, shouldn't your dataset size be 35000 (halved) and not 70000? Or are you using both (A, B) and (B, A) for each pair? If so, this doesn't really seem like a good multi-view dataset since the views themselves are basically indistinguishable overall. - In table 3, the 3rd row seems to not have any useful information. It's clear that $L_{rec}$ would be required since you're using an auto-encoder. Instead, maybe you can have $L_{rec} + L_{suf}$ which you don't have here. That would be interesting to see. I will leave line-by-line comments here (since there doesn't seem to be a better place for this). 
There are a few typos here and there, but I won't bother too much with those: - [Line 32] Did you mean maximizing MI between representation* and output (not input and output) - [Line 82] "... to quantify amount of" -- sentence is not complete. - [Section 3] You use y, z without explaining/defining what they are. Clustering is an unsupervised task, so what does this mean in this context? I'm guessing it means cluster assignment but it isn't clear. - [Line 164] I may be wrong but isn't $\phi^i$ the generational parameter set, and not $\theta^i$? - [Line 180] What is a "pseudo-label?" A cluster assignment? You should define this earlier. - [Line 164] What do you mean by unique distribution? - [Line 186] This equation is very hard to parse. For cleaner appearance, you should consider using \left( and \right) to have larger parentheses. You could also remove the superscripts just for these long equations where there is no $j$, and just leave a note below. Also, shouldn't there be a conditioning on y in the first term? Lastly, $KL$ -> $D_{KL}$. - [Line 198] maximizing* - [Equation 9] There should be no expectation here, right? The $D_{KL}$ absorbs it, I think. Also, for the second term, the first distribution uses $z^j$ instead of $z^i$. - [Line 243] Should it be $z^j_m$ here? Also, what is $P^{\otimes n}$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - [Author mentioned] Heterogeneity in the data (eg. in terms of dimensions of features) affects performance significantly. - Restrictive assumptions are made on mutual redundancy between views. Multi-view data often has only partial information available in each view. I.e. you might need more than one view to get the complete picture. 
This also seems like a weakness of the methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the thoughtful comments and feedback. Below please find our detailed responses to the questions. 1. This paper is more about representation learning for clustering than clustering itself, right? We appreciate your insightful observation. You're correct in noting that our paper primarily focuses on the aspect of representation learning for clustering, which is analogous to spectral clustering. The use of the k-means method on top of the learned representation is indeed a subsequent step, and the Multi-View Representation Learning (MVRL) method forms the core of our discussion. In response to your feedback, we will revise our manuscript to make this point more explicit upfront. We believe this adjustment better frames the main contributions of our work and minimizes any potential misunderstandings about the paper's focus. Thank you again for pointing out this nuance. 2. As I mentioned in the second point of the Limitations, it seems like this method is also weak to only partial information existing in views. How would you remedy this? Would ensemble variational methods work here as well? Thank you for your insightful comments. We appreciate your mention of the issue of partial information existing in views, which indeed is a challenge that our method currently faces. The premise of MVC is that a single view contains only partial information, making it difficult to depict the complete clustering structure from any one view alone. MVC combines complementary information from multiple views to achieve better clustering results. If "partial information" refers to missing data, one possible strategy could involve integrating additional data preprocessing steps, such as imputation methods for handling missing data, which could potentially enhance the robustness of our method when dealing with partial views. Regarding your suggestion of using ensemble variational methods, we believe it's a very promising direction. 
Ensemble methods could indeed provide a solution to this issue by leveraging the consensus among multiple models, each trained on a different subset of the data. This approach could help to address the inherent uncertainty and variability in the data, and thus potentially improve the performance of our method when dealing with partial views. We will follow your suggestion to add a discussion about these potential strategies to the manuscript. We believe that this addition will stimulate further research on this topic and provide a roadmap for improving the current limitations of our method. We hope this adequately addresses your concerns. We greatly value your suggestions and look forward to any further feedback you may have. 3. For the MNIST experiments, I am confused about your choice to use pairs of the same digit (but written by different people) as two views. The data distributions of both views are very similar, then. Also, if you pair up digits, shouldn't your dataset size be 35000 (halved) and not 70000? Or are you using both (A, B) and (B, A) for each pair? If so, this doesn't really seem like a good multi-view dataset since the views themselves are basically indistinguishable overall. Thank you for your thoughtful feedback, which will undoubtedly improve the quality of our paper. The MNIST-MV data we used in the experiments is a publicly available dataset and has been widely used in previous multi-view studies, e.g., "Deep safe multi-view clustering: Reducing the risk of clustering performance degradation caused by view increase, CVPR 2022", "Multi-VAE: Learning Disentangled View-common and View-peculiar Visual Representations for Multi-view Clustering, ICCV 2021", and "Multi-view Semantic Consistency based Information Bottleneck for Clustering, arxiv 2023". 
Regarding our choice to use pairs of the same digit written by different people as two views, the rationale behind this decision was to examine the ability of our model to identify and learn from subtle differences in similar-looking data. While the two views may seem indistinguishable at a macro level, they can contain minute differences at a micro level. These differences are due to the unique handwriting styles of different individuals, which our model aims to capture and learn from. As for your question about the dataset size, you are correct in your understanding. We indeed used both (A, B) and (B, A') pairs (A' may not be equal to A), effectively maintaining the original dataset size of 70,000. The reason behind this approach was to increase the diversity of our training data and further test the robustness of our model. However, we understand your concern about the potential impact on the multi-view nature of the dataset. In light of your comments, we will ensure that our choices are properly justified in the revised manuscript. 4. In table 3, the 3rd row seems to not have any useful information. It's clear that $L_{rec}$ would be required since you're using an auto-encoder. Instead, maybe you can have $L_{rec} + L_{suf}$ which you don't have here. That would be interesting to see. Thank you for your insightful comments. We will remove the 3rd row to avoid any confusion, and the results of the ablation experiment with the inclusion of $L_{rec} + L_{suf}$ can be found in the table below.

| Dataset | Multi-MNIST | Multi-Fashion | Multi-COIL-20 |
| --- | --- | --- | --- |
| ACC | 98.4 | 84.6 | 86.9 |
| NMI | 96.0 | 80.8 | 91.0 |
| ARI | 96.6 | 75.2 | 83.1 |

We found that the model with only the $L_{rec} + L_{suf}$ terms performs worse than SUMVC. This is because $L_{suf}$ helps the model learn the distributional features of the latent layer. Therefore, the lack of this constraint makes it challenging for the model to effectively learn these features. 
We hope these adjustments and explanations address your concerns satisfactorily. --- Rebuttal Comment 1.1: Comment: I acknowledge the author's comments, and have gone through the other reviewers' feedback. The authors' responses have answered my questions well. I believe that the presentation of the paper will be much clearer after incorporating the changes they mentioned in their rebuttal. I hope that weakness #4 above will also be addressed in their changes to the manuscript (i.e. paper not being self-contained and needing the appendix to really understand it). My main remaining concern is that the evaluations/experiments are not conducted on natural multi-view datasets, but rather on modified single-view datasets. While I understand that MNIST-MV is commonly used in literature, lacking experimental evaluations on natural multi-view datasets detracts from the impact of the contribution. It seems that most of the reviewers have similar concerns on the experimental evaluations. If additional experiments on other datasets have been conducted since then, I would like to see those evaluations as well. At this point, I intend to keep my original score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer NC94, We appreciate your response to our rebuttal and the additional questions raised. Regarding the evaluations being conducted on modified single-view datasets rather than natural multi-view datasets, we'd like to clarify that the Multi-COIL-10 and Multi-COIL-20 datasets are not derived from single-view data; each view represents a different angle at which the photos were taken. To further address your concerns, we have conducted additional experiments using two widely used multi-view datasets: the REU dataset and the HW dataset. 
We have also introduced five additional comparison methods, namely IDEC (Improved Deep Embedded Clustering), CSMSC (Consistent and Specific Multi-View Subspace Clustering), FMR (Flexible Multi-View Representation Learning for Subspace Clustering), GMC (Graph-Based Multi-View Clustering), and CGMSC (Multi-View Subspace Clustering with Adaptive Locally Consistent Graph Regularization). The results are presented in the tables below.

| Method (REU) | IDEC | SAMVC | GMC | DEMVC | SUMVC |
| --- | --- | --- | --- | --- | --- |
| ACC | 46.0 | 18.8 | 19.8 | 46.7 | 58.3 |
| NMI | 25.2 | 4.6 | 13.8 | 25.3 | 60.0 |
| ARI | 18.0 | 0.3 | 1.3 | 20.4 | 47.5 |

| Method (HW) | SAMVC | FMR | CSMSC | CGMSC | FMVACC | SUMVC |
| --- | --- | --- | --- | --- | --- | --- |
| ACC | 76.4 | 86.1 | 89.8 | 69.1 | 89.5 | 96.4 |
| NMI | 84.4 | 76.5 | 83.0 | 81.8 | 86.0 | 93.2 |
| ARI | 73.9 | 72.6 | 79.5 | 69.5 | 85.0 | 92.4 |

It can be observed that our method performs well on these datasets. We will add new comparative methods and datasets in the final version. We hope our response has addressed your concerns.
null
NeurIPS_2023_submissions_huggingface
2023
Summary: A consistent variational lower bound is provided to explore the consistent information among views for multi-view clustering, based on which SCMVC (simple consistent multi-view clustering) is proposed. To enhance consistent information and minimize unnecessary information among views, a sufficient representation lower bound is further proposed. Strengths: (1) The main idea is novel; necessary preliminary knowledge and sufficient theoretical analysis are given. (2) Experiments conducted on real multi-view data demonstrate the good performance of the proposed methods. The codes are available. (3) The findings of this study have the potential to contribute to the advancement of multi-view clustering techniques. Weaknesses: (1) It is said that the proposed model does not perform well on datasets with strong heterogeneity between views, but no evidence or experiments support this statement. How heterogeneity affects performance should be explained. (2) The paper shows quantitative results of the proposed SUMVC and SCMVC. However, I expect some visualization results to show the difference between these two methods. (3) A thorough analysis of potential drawbacks and practical considerations would enhance the overall strength of the paper. (4) The tested data sets in this paper contain a small number (<=3) of views. It is suggested to add data sets with more than three views for discussion. (5) The writing should be further improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see ‘Weaknesses’. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and feedback. Below, we provide our responses to the key questions that were raised by the reviewer. 1. It is said that the proposed model does not perform well on datasets with strong heterogeneity between views, but no evidence or experiments support this statement. How heterogeneity affects performance should be explained. Thank you for your insightful comment. We have conducted additional experiments on the BDGP dataset to substantiate our initial statement. BDGP contains 5 different types of Drosophila. Each sample has visual and textual views. It has high heterogeneity between views. To mitigate the heterogeneity issue, we explored the use of a Multilayer Perceptron (MLP) to harmonize dimensions. We also tested the efficacy of sharing parameters between views. Both strategies have shown promise in alleviating the performance decrement in heterogeneous datasets. We made a comparison between four models: (1) SCMVC-NS (not-share): the original SCMVC without consideration of heterogeneity between views, (2) SCMVC-MLP: the improved SCMVC using a single layer MLP as a consideration of unifying dimensions and sharing parameters between views, (3) SUMVC-NS: the original SUMVC without consideration of heterogeneity between views, and (4) SUMVC-MLP: the improved SUMVC using a single layer MLP as a consideration of unifying dimensions and sharing parameters between views. 
| Method | SCMVC-NS | SCMVC-MLP | SUMVC-NS | SUMVC-MLP | SAMVC | FMVACC |
| --- | --- | --- | --- | --- | --- | --- |
| ACC | 49.9 | 55.8 | 55.3 | 71.4 | 51.3 | 58.6 |
| NMI | 44.8 | 48.6 | 39.9 | 60.3 | 45.2 | 36.8 |
| ARI | 29.6 | 35.1 | 27.3 | 45.3 | 19.6 | 44.3 |

The experimental outcomes demonstrate that while our model does not reach the ideal level on the BDGP dataset with strong heterogeneity between views, it delivers a comparable performance to the existing models, i.e., FMVACC (Fast multi-view anchor-correspondence clustering, NeurIPS 2022) and SAMVC (Self-paced and auto-weighted multi-view clustering, NC 2020). We believe these additional experiments and proposed solutions provide a clearer understanding of our model's behavior under varying degrees of heterogeneity. We hope this addresses your concerns satisfactorily and provides a more comprehensive view of our work. 2. The paper shows quantitative results of the proposed SUMVC and SCMVC. However, I expect some visualization results to show the difference between these two methods. Thank you for your insightful suggestion. We agree that providing visual results could further illustrate the differences between the SUMVC and SCMVC methods, in addition to the quantitative results we have already provided. To this end, we will provide additional visualizations to better elucidate the distinctions between the two methods. 3. A thorough analysis of potential drawbacks and practical considerations would enhance the overall strength of the paper. Thank you for your constructive feedback. We discussed potential drawbacks in the Appendix: our model does not perform well on datasets with particularly strong heterogeneity between views, such as huge differences in dimensions of different views. We will add a new paragraph in the Conclusion section to address this issue. 
In this section, we critically analyze the limitations of our approach, discuss possible scenarios where our method may not perform optimally, and outline practical considerations for researchers and practitioners intending to adopt our methodology. Please see the details below: The heterogeneity between views (i.e., the distributions of different views are obviously different) can make it more difficult for the VAE to learn a meaningful latent representation of the data. When the views are highly dissimilar such as views of BDGP, it may be challenging for the VAE to find a shared low-dimensional representation that captures the important features of both views. This can lead to suboptimal performance and poor reconstruction quality. We believe that this addition will provide the reader with a more comprehensive understanding of our work, and more importantly, it will stimulate further research to overcome the highlighted limitations. For practical considerations, multi-view clustering has broad prospects in the fields of data analysis and pattern recognition. The main idea behind multi-view clustering is to utilize multiple data sources or feature sets to perform clustering analysis, thereby improving the accuracy and robustness of clustering results. Multi-view clustering can be applied in various domains such as bioinformatics, social network analysis, image processing, and text mining. In bioinformatics, multi-view clustering can be used for the analysis of gene expression data, combining different experimental platforms and data types to discover more accurate gene expression patterns and biological features. In social network analysis, multi-view clustering can integrate users' social relationships, interest tags, and behavioral data to achieve more refined user segmentation and community discovery. 4. The writing should be further improved. We appreciate your feedback on our writing. We will review and revise our manuscript. 
We focus on improving the clarity, coherence, and conciseness of our writing, ensuring our arguments are well-structured, and that our language is precise. --- Rebuttal Comment 1.1: Title: I Comment: Dear authors, Thanks for your detailed reply. My concerns have been addressed by your response. The additional experiment comparisons are hoped to be added in the revised version. I would like to increase my score to 'weak accept'.
null
null
null
null
null
null
Best Arm Identification for Stochastic Rising Bandits
Reject
Summary: Stochastic Rising Bandits (SRBs) model sequential decision-making problems in which the expected reward of the available options increases after every time they are selected. While previous works addressed the regret minimization problem, this paper studied the fixed-budget Best Arm Identification (BAI) problem for SRBs. This work proposed the R-UCBE and R-SR algorithms and showed that these two algorithms with classical designs achieve a small failure probability when the time horizon is sufficiently large. With a lower bound, the author(s) also showed that the R-SR algorithm is near-optimal and a sufficiently large horizon is unavoidable for any algorithm to perform well. Lastly, experiments are provided to validate the empirical performance of BAI algorithms. =========== The score is increased after I read the response from the authors. Strengths: 1. This work clearly formulates the rising bandit problem in Section 2. 2. Before presenting the R-UCBE and R-SR algorithms, the author(s) describe how to estimate the expected rewards of arms in Section 3, which actually conveys the intuition behind the algorithm designs. This can inspire readers to propose efficient bandit algorithms even beyond this rising bandit setting. 3. In Section 7, experiments are provided to validate the empirical performance of BAI algorithms. Weaknesses: 1. In Theorem 6.1, the lower bound on the time horizon $T$ depends on $\Delta_i(T)$. The quantity, $\Delta_i(T)$, depends on the instance, horizon $T$, and also the algorithm that we apply. Hence, $\Delta_i(T)$ seems to be a random variable, and I don't think a random variable should appear in the lower bound. Moreover, we usually expect a lower bound to hold true for many algorithms, and the term $\Delta_i(T)$ seems to be different for different algorithms, or even different for the same algorithm in different trials. 2. I have a similar concern for the upper bounds on the failure probabilities of the R-UCBE and R-SR algorithms. 
I think the contribution of this paper would be much clearer if my concerns above were resolved. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My major concerns are listed in the *Weakness* section. Some other suggestions are as below: 1. Line 45: 'failing to represent' may be revised to be 'but failed to represent' 2. At the bottom of Page 2, it states that 'A complete discussion of the related works is available in Appendix A. Additional motivating examples are discussed in Appendix B.' I think these are important components of the paper, and at least a brief version should be included in the main paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our work and for the interesting comments. Below, we address the concerns of the Reviewer. ## Weaknesses > In Theorem 6.1, the lower bound on the time horizon $T$ depends on $\Delta_i(T)$. The quantity, $\Delta_i(T)$, depends on the instance, horizon $T$, and also the algorithm that we apply. Hence, $\Delta_i(T)$ seems to be a random variable, and I don't think a random variable should appear in the lower bound. Moreover, we usually expect a lower bound to hold true for many algorithms, and the term $\Delta_i(T)$ seems to be different for different algorithms, or even different for the same algorithm in different trials. > I have a similar concern for the upper bounds on the failure probabilities of the R-UCBE and R-SR algorithms. We are happy to clarify the concern raised by the Reviewer. Given a specific time budget $T$ (which is an input of the fixed-budget BAI problem) and given an instance of the stochastic rising bandit (i.e., the expected rewards $\mu_i(t)$ functions), the values of the suboptimality gaps $\Delta_i(T) := \mu_{i^*(T)}(T) - \mu_i(T)$ are well-defined. They are (1) **algorithm-independent** (being defined through $T$ and $\mu_i(t)$ irrespective of the used algorithm) and (2) they are **not random variables** (since $T$ and $\mu_i(t)$ are deterministic). Thus, our lower bound (Theorem 6.1) depends on $\Delta_i(T)$ which, in turn, depends on the time budget $T$ and on the instance of the SRB (i.e., $\mu_i(t)$), as is common in the BAI literature ([1], [2] and [3]). Indeed, **the lower bound holds for every algorithm run with a sufficiently large time budget (Theorem 6.1)**. Similarly, the upper bounds on the failure probabilities of the R-UCBE and R-SR algorithms depend on the same quantities. We will clarify these points in the final version of the paper. ## Questions > Line 45: 'failing to represent' may be revised to be 'but failed to represent'. 
We thank the Reviewer for pointing it out, we will adjust this sentence in the final version of the paper. > At the bottom of Page 2, it states that 'A complete discussion of the related works is available in Appendix A. Additional motivating examples are discussed in Appendix B.' I think these are important components of the paper, and at least a brief version should be included in the main paper. We will include these parts in the final version of the paper exploiting the additional page. --- [1] Audibert, Jean-Yves, Sébastien Bubeck, and Rémi Munos. "Best arm identification in multi-armed bandits." COLT. 2010. [2] Carpentier, Alexandra, and Andrea Locatelli. "Tight (lower) bounds for the fixed budget best arm identification bandit problem." Conference on Learning Theory. PMLR, 2016. [3] Kaufmann, Emilie, Olivier Cappé, and Aurélien Garivier. "On the complexity of best arm identification in multi-armed bandit models." Journal of Machine Learning Research 17 (2016): 1-42. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. The score is increased. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for taking the time to read our response. If the Reviewer has any other concerns about our work, we will be happy to clarify them.
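To make the algorithm-independence argument in the exchange above concrete, here is a minimal sketch: given a budget and an instance (the deterministic reward curves), the gaps are fixed numbers. The reward functions `mu` and the budgets below are hypothetical, not taken from the paper.

```python
import math

# Hypothetical SRB instance: each arm's expected reward is a fixed,
# deterministic, non-decreasing concave function of its number of pulls.
mu = [
    lambda n: 0.9 * (1 - math.exp(-0.05 * n)),  # arm 0: slow riser, high ceiling
    lambda n: 0.6 * (1 - math.exp(-0.20 * n)),  # arm 1: fast riser, low ceiling
]

def gaps(T):
    """Suboptimality gaps Delta_i(T) = mu_{i*(T)}(T) - mu_i(T).

    They depend only on the budget T and the instance (the mu functions),
    never on the algorithm that is run or on any random draw."""
    values = [m(T) for m in mu]
    best = max(values)
    return [best - v for v in values]

print(gaps(10))   # arm 1 is the best arm at a small budget
print(gaps(500))  # arm 0 is the best arm at a large budget
```

Note that the identity of the optimal arm itself changes with the budget `T`, which is why the gaps must be indexed by `T` in this setting.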
Summary: The paper studies fixed-budget best arm identification (BAI) under the stochastic rising bandit (SRB) problem. The stochastic rising bandit setting assumes that the mean reward of an arm increases as it is played more. By assuming a concave increasing reward, the paper provides upper bounds for two algorithms: R-UCBE (which is UCB-type) and R-SR (which is elimination-based). It further provides a lower bound, which is matched by R-SR up to logarithmic factors. Numerical experiments are reported. Strengths: - The paper is well-written and carefully organized - Matching lower and upper bounds (up to logarithmic factors) Weaknesses: 1) Deterministic growth function $\gamma$, meaning that the randomness of $\gamma$ does not accumulate; not the practical case (consider SGD, where former parameters affect subsequent ones) 2) What if $c \rightarrow 0$? The requirement for $T$ (10) seems to vanish. Can you explain further? Can you recover the bound of Audibert et al. (2010)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on the review and for appreciating our work. Below, we address the concerns of the Reviewer. ## Weaknesses > Deterministic growth function $\gamma$, meaning that the randomness of $\gamma$ does not accumulate; not the practical case (consider SGD, where former parameters affect subsequent ones) We agree with the Reviewer that the proposed setting (motivated by SGD) is of practical interest. However, our work is framed in the established Stochastic Rising Bandit (SRB) setting of [1], which considers a noise model where the stochasticity in the observed reward does not accumulate. Indeed, considering the setting the Reviewer proposes (i.e., accumulating noise) would require accounting for the *statistical dependence of consecutive observed rewards*. We agree that incorporating this accumulating noise model would make the setting closer to practical applications. However, we believe this would imply further technical challenges that would warrant a re-definition of the setting, which, in our opinion, is out of the scope of the present paper. > What if $c \rightarrow 0$? The requirement for $T$ (10) seems to vanish. Can you explain further? Can you recover the bound of Audibert et al. (2010)? The term $c$ appears in Assumption 2.2, which requires a bound on the growth rates of the arms, i.e., $\gamma_i(n) \leq cn^{-\beta}$. Thus, when $c \rightarrow 0$, the arms have an expected reward that does not change with the number of pulls, i.e., $\gamma(t) \rightarrow 0$ for all $t \in [T]$. Setting $c\rightarrow 0$ has different effects on R-UCBE and R-SR. * **R-UCBE**: the minimum admissible time budget becomes $T \ge (K-1)^3$ (since $c\rightarrow 0$ we can freely select $\beta > 3/2$), depending on the number of arms $K$ only (Eq. 10). 
Our error probability bound becomes $2TK \exp \left(-\frac{\varepsilon^3}{40\sigma^2}\left(\frac{T^{1/3} - (K-1)}{H_{1, 2/3}(T)}\right)^3 \right)$ and does not correspond to that of [2]. This is because, differently from [2], we are using the *optimistic estimator* which involves the estimate of the increment, leading to looser concentration guarantees compared to the standard sample mean (see Lemma 3.2). A similar phenomenon is present in the original paper [1] for regret minimization, where the regret bound (Theorem 4.4) remains of order $T^{2/3}$ even for the stationary case, while $\sqrt{T}$ could be achieved by using the sample mean as an estimator. Note that using the *pessimistic estimator* in R-UCBE is not sound for the rising setting, but it is for the stationary setting, leading to guarantees analogous to that of [2]. * **R-SR**: here, instead, the minimum admissible time budget requirement vanishes (Eq. 12 becomes simply $T>0$), as in [2]. Moreover, our error probability bound becomes $\frac{K(K-1)}{2}\exp\left( -\frac{\varepsilon}{8 \sigma^2} \cdot \frac{T-K}{ \overline{\log} (K) H_2 }\right)$, matching the error probability of [2] apart from the constant term $\frac{\varepsilon}{8\sigma^2}$ deriving from the use of a windowed estimator (window of size $\lfloor \varepsilon N_{i,t-1} \rfloor$) and the analysis based on $\sigma^2$-subgaussian rewards (instead of $[0,1]$-bounded rewards as in [2]). We will add a discussion on this in Sections 4 and 5 of the final version of the paper. --- [1] Metelli, A. M., Trovo, F., Pirola, M., & Restelli, M. Stochastic rising bandits. In International Conference on Machine Learning. 2022. [2] Audibert, J. Y., Bubeck, S., & Munos, R. Best arm identification in multi-armed bandits. In Conference on Learning Theory. 2010.
Summary: The paper is about bandit best arm identification with fixed budget, in a non-stationary setting. This is a rested bandit problem: the mean reward of an arm changes each time it is pulled, but does not change when it is not pulled. The main assumption is that the mean reward is a non-decreasing, concave function of the number of pulls. Some results also use another assumption: an upper bound on the increments. The authors introduce estimators of the mean rewards that are adapted to that setting and use them in two algorithms R-UCBE and R-SR, which are inspired by the UCBE and SR fixed budget algorithms. The paper contains upper bounds on the error probability of these algorithms as well as lower bounds on the error probability of any algorithm and a discussion of the minimal budget necessary to identify the best arm. Strengths: The rising bandit problem is important: it corresponds to allocating resources to different learning algorithms, in order to identify the one with best performance once fully trained. The estimators and algorithms are well explained and motivated. The graphical representation of figure 1 is very helpful. The lower and upper bounds show that the methods are close to optimal for the problem. The discussion of the minimal budget necessary to identify the best arm is interesting and highlights a feature of rising bandits which is not present in standard BAI. The experimental evaluation is convincing. Weaknesses: R-UCBE depends on a parameter that needs to be tuned using unavailable information, but that theoretical weakness is directly inherited from UCBE and the practical performance of the algorithm is very good. Hence this is a very mild weakness. The lower bound of theorem 6.2 is of order $\exp(-T/H)$, while a famous feature of fixed budget BAI (in the stationary setting) is that this is not achievable. 
Indeed, it is shown in [Carpentier and Locatelli, Tight (lower) bounds for the fixed budget best arm identification bandit problem, Colt 2016] that there is a lower bound of order $\exp(-T/(H \log K))$, matching the upper bound of SR. In light of that lower bound, we would expect a similar result for rising bandits, stronger than Theorem 6.2. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is it possible to strengthen the lower bound? Why nondecreasing and concave mean rewards? It should be easy to get large lower bounds if assumption 2.1 is not satisfied. Could you point to such results, in order to justify the assumption? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are adequately discussed. No concern about a negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on the review and for appreciating our work. Below we provide the responses to the Reviewer's concerns. ## Weaknesses > R-UCBE depends on a parameter that needs to be tuned using unavailable information, but that theoretical weakness is directly inherited from UCBE and the practical performance of the algorithm is very good. Hence this is a very mild weakness. We agree with the Reviewer that this parameter is challenging to set in a theoretically-sound way, even if this problem, as the Reviewer noted, directly derives from UCBE. Aware of this issue, we conducted a **sensitivity analysis on parameter $a$** evaluating the error rate when we misspecify it. In Figure 4, we show that using values of $a$ with different magnitudes w.r.t. the optimal value $a^*$ still provides good performance for the R-UCBE. Noteworthy, R-SR is designed (as well as the original SR in the paper from [1]) with the goal to overcome the need for setting this parameter. > The lower bound of theorem 6.2 is of order $\exp(-T/H)$, while a famous feature of fixed budget BAI (in the stationary setting) is that this is not achievable. Indeed, it is shown in [Carpentier and Locatelli, Tight (lower) bounds for the fixed budget best arm identification bandit problem, Colt 2016] that there is a lower bound of order $\exp(-T/(H \log K))$, matching the upper bound of SR. In light of that lower bound, we would expect a similar result for rising bandits, stronger than Theorem 6.2. >Is it possible to strengthen the lower bound? We thank the Reviewer for raising this interesting point. We remark that our construction for the lower bound is based on the work [3] (Theorem 17) which presents a result compatible with ours (without the $\log K$ term). Nevertheless, we point out that our construction exploits the instances of Figure 5 and, specifically, the fact that the dissimilarity between the instances increases as the time $t$ increases. 
Consequently, our lower bound is derived for the simpler case in which both instances have reached the "regime" behavior (derivation at line 713 when upper bounding the KL divergence at time $t$ with the one at time $T$). In other words, ours is a lower bound that holds for the stationary bandit in which $\mu_i(T)$ is always considered as the expected reward. Therefore, we believe that the proof of (Carpentier and Locatelli, 2016) can be integrated into our setting with no further challenges, leading to the additional $\log K$ term. We will complement the lower bound analysis in the final version of the paper. ## Questions > Why nondecreasing and concave mean rewards? It should be easy to get large lower bounds if Assumption 2.1 is not satisfied. Could you point to such results, in order to justify the assumption? Without such assumptions (*non-decreasing* and *concave* expected rewards) the error probability cannot be guaranteed to be decreasing as a function of the budget $T$. From an intuitive perspective, this is similar to what happens for regret minimization in [2, Theorem 4.2], in which the authors demonstrate the non-learnability (i.e., an $\Omega(T)$ regret lower bound) when these two assumptions do not hold. From a technical perspective, we can easily show that the error probability no longer depends on $T$ when we just remove the **concavity assumption**. We consider two Gaussian bandits with unit variance. Let $\boldsymbol\nu$ be a 2-armed bandit with expected rewards $\mu_1(t) = 1/2$ and $\mu_2(t) = 3/4$, both $\forall t \in [T]$, thus its optimal arm is $i^*_{\boldsymbol\nu}(T)=2$ (we recall that the optimal arm in our setting is the one having the highest expected reward at $T$). Let $\boldsymbol\nu'$ be a 2-armed bandit with expected values $\mu_1(t) = 1/2 \; \forall t < T$, $\mu_1(t) = 1$ if $t=T$ and $\mu_2(t) = 3/4 \; \forall t \in [T]$, thus the optimal arm is $i^*_{\boldsymbol\nu'}(T)=1$. 
Notice that bandit $\boldsymbol\nu'$ violates the concavity assumption. Now, applying the **Bretagnolle-Huber** inequality, we have that:
$$
\begin{aligned}
\max\{\text{Pr}_{\boldsymbol\nu}(\hat{I}(T) \neq 2), \text{Pr}_{\boldsymbol\nu'}(\hat{I}(T) \neq 1)\} &\ge \frac{1}{4} \exp\left(-\mathbb{E}_{\boldsymbol\nu}\left[\sum_{t=1}^T D_{\text{KL}}(\boldsymbol\nu_{I_t}(N_{I_t,t}), \boldsymbol\nu'_{I_t}(N_{I_t,t}))\right]\right) \\
&\ge \frac{1}{4} \exp\left(-D_{\text{KL}}(\boldsymbol\nu_{1}(T), \boldsymbol\nu'_{1}(T))\right) \\
&= \frac{1}{4} \exp\left(-\frac{1}{8}\right),
\end{aligned}
$$
where $\hat{I}(T)$ is the arm recommended at time $T$, having observed that $D_{\text{KL}}(\boldsymbol\nu_{I_t}(N_{I_t,t}), \boldsymbol\nu'_{I_t}(N_{I_t,t})) = 0$ if $t < T$ regardless of the arm $I_t \in \{1,2\}$, and that $D_{\text{KL}}(\boldsymbol\nu_{I_T}(N_{I_T,T}), \boldsymbol\nu'_{I_T}(N_{I_T,T})) \le D_{\text{KL}}(\boldsymbol\nu_{1}(T), \boldsymbol\nu'_{1}(T)) = 1/8$. The last line shows a lower bound on the error probability that is budget-independent; thus, such a setting, obtained by removing the concavity assumption, is non-learnable. We will add this result in Section 6 of the final version of the paper. --- [1] Audibert, J. Y., Bubeck, S., & Munos, R. Best arm identification in multi-armed bandits. In Conference on Learning Theory. 2010. [2] Metelli, A. M., Trovo, F., Pirola, M., & Restelli, M. Stochastic rising bandits. In International Conference on Machine Learning. 2022. [3] Kaufmann, Emilie, Olivier Cappé, and Aurélien Garivier. "On the complexity of best arm identification in multi-armed bandit models." Journal of Machine Learning Research 17 (2016): 1-42.
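The closed-form quantities in the non-learnability example above can be double-checked numerically. This is only a sanity-check sketch of the Gaussian KL computation and of the resulting budget-independent bound, not part of the authors' proof.

```python
import math

def kl_gauss(mu1, mu2, sigma=1.0):
    """KL divergence between N(mu1, sigma^2) and N(mu2, sigma^2)."""
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2)

# The two instances only differ in arm 1 at the final round:
# mu_1(T) = 1/2 under nu and mu_1(T) = 1 under nu'.
kl_T = kl_gauss(0.5, 1.0)            # = 1/8
lower_bound = 0.25 * math.exp(-kl_T)  # (1/4) * exp(-1/8)

print(kl_T)         # 0.125
print(lower_bound)  # ~0.2206, independent of the budget T
```

The point of the construction is visible here: the bound never shrinks as `T` grows, so no algorithm can drive the error probability to zero without the concavity assumption.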
Summary: This study explores the stochastic rising bandits (SRB) in fixed-budget best arm identification (BAI). The authors initially formulate this novel problem setting and then introduce two types of estimators. For these estimators, they demonstrate upper bounds that match their lower bounds. Lastly, they validate the reliability of their approach through various experiments. Strengths: Firstly, this is my initial exposure to a paper on the SRB setting. Consequently, I'm not sure if the setting is truly novel. However, I am persuaded that the setting is both crucial and intriguing, particularly given the practical importance of the CASH problem. As I'm unable to gauge the novelty of the setting, my remarks are primarily oriented towards the technical aspects of BAI. Firstly, the concepts of pessimistic and optimistic estimators are persuasive and well-founded. The authors deliver thorough and robust theoretical results for the estimators, consistent with established practices in this field. Even though the outcomes are not surprising, I see no strong grounds to reject this paper. On the whole, this study presents a typical analysis within an interesting and novel setting. Please note, this is a preliminary review. I am currently delving into further details, including the proof. I may revise this review later. Weaknesses: It appears that a weakness resides in the need for a large budget. Moreover, verifying whether the condition is met could be challenging due to the somewhat complex form of the time budget constraint. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could we derive more practically applicable results for realistic settings by assuming specific models for rewards? For instance, in economics, we often presume particular models for a utility function that grows as the quantity of certain variables (e.g., consumption) increases. 
That is, if we define a certain mechanism (e.g., linear models) for the increasing rewards, could we achieve tighter results? If the authors' claims are correct, this study serves as a pioneering effort in this field. Therefore, incorporating such constraints could be seen as future work, and isn't necessarily a task the authors need to undertake at present. However, I am interested in understanding the potential for such an extension. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent on the review and for appreciating our work. Below we provide the responses to the Reviewer’s concerns. ## Weaknesses > It appears that a weakness resides in the need for a large budget. Moreover, verifying whether the condition is met could be challenging due to the somewhat complex form of the time budget constraint. A large time budget $T$ is required to allow the arms to almost reach their "regime" values. Indeed, even if in practice our algorithms perform well for way smaller time budgets (as shown in the experiments of Section 7), the theoretical guarantees consider the worst-case scenarios. We remark that a minimum time budget $T$ is indeed **unavoidable** as proved by our lower bound for the time budget (Theorem 6.1) which is matched (up to logarithmic terms) by our R-SR algorithm (Theorem 5.1). ## Questions > Could we derive more practically applicable results for realistic settings by assuming specific models for rewards? For instance, in economics, we often presume particular models for a utility function that grows as the quantity of certain variables (e.g., consumption) increases. That is, if we define a certain mechanism (e.g., linear models) for the increasing rewards, could we achieve tighter results? If the authors' claims are correct, this study serves as a pioneering effort in this field. Therefore, incorporating such constraints could be seen as future work, and isn't necessarily a task the authors need to undertake at present. However, I am interested in understanding the potential for such an extension. As the Reviewer noted, we address the setting in which the only assumptions relate to the (1) *non-decreasing* and (2) *concave* shape of the expected values, which are, in a sense, the less demanding ones (see also the Response to Reviewer fA5g for a discussion on the need for these assumptions). 
We agree with the Reviewer that considering particular functional forms of the expected rewards $\mu_i(t)$ (i.e., beyond (1) and (2)), typical of specific realistic settings (e.g., economics), will likely lead to tighter and more applicable results. In principle, one could consider a generic **known parametric functional form** for the expected rewards $\mu_{i}(t;\boldsymbol\theta)$ depending on an **unknown vector of parameters** $\boldsymbol\theta$, making use of suitable estimators and exploiting the uncertainty on the $\boldsymbol\theta$ estimate. A specific example of this is [1], in which a particular known polynomial class of functions is considered for the expected rewards $\mu_{i}(t;\boldsymbol\theta)$. We will add a comment on this in Section 8 of the final version of the paper. --- [1] Cella, L., Pontil, M., & Gentile, C. Best model identification: A rested bandit formulation. In International Conference on Machine Learning. 2021.
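As an illustration of the parametric direction sketched in the rebuttal above, estimating $\boldsymbol\theta$ for an assumed reward model could be as simple as a least-squares fit. Everything here (the exponential functional form, the noise level, the grid search) is a hypothetical choice for illustration, not taken from [1] or from the paper.

```python
import math
import random

random.seed(0)

# Assumed parametric form mu(t; theta) = theta * (1 - exp(-0.1 * t)):
# only the scale theta is unknown, so a 1-D least-squares fit suffices.
theta_true = 0.8
pulls = list(range(1, 51))
rewards = [theta_true * (1 - math.exp(-0.1 * t)) + random.gauss(0, 0.05)
           for t in pulls]

def sse(theta):
    """Sum of squared errors of the assumed model with parameter theta."""
    return sum((r - theta * (1 - math.exp(-0.1 * t))) ** 2
               for t, r in zip(pulls, rewards))

# Coarse grid search over theta in [0, 2] (a stand-in for a proper solver).
theta_hat = min((i / 1000 for i in range(2001)), key=sse)
print(theta_hat)  # close to 0.8
```

With such an estimate in hand, one could plug the fitted curve (plus a confidence interval on the parameter) into the arm-selection rule, which is the kind of extension the rebuttal points to.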
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Structured Voronoi Sampling
Accept (poster)
Summary: The paper pushes the frontier of gradient-based sampling for autoregressive models. The paper lays out clear theoretical issues in applying such techniques and addresses two of them: 1) the major contribution of the paper consists in constructing a Voronoi-like probability space over the output token embeddings to enable a gradient-based sampling process; 2) the SVS algorithm also contains the novelty of dealing with the discontinuity at the Voronoi borders via a novel sampling algorithm. The authors test the proposed algorithm in a toy setting, confirming that its theoretical superiority over gradient-sampling baselines transfers to this toy dataset. The authors also test their algorithm on generation and conditional-generation tasks, where the algorithm shows good performance in terms of success, perplexity, and diversity. Strengths: The paper has made good contributions toward making gradient-based sampling more practical for tackling real problems. The theoretical contribution of constructing a Voronoi distribution based on embeddings to perform HMC sampling is a significant one; furthermore, the paper addresses some practical issues via a novel sampling algorithm. The theoretical layout is also quite clear, well highlighting the solved issues. The paper confirms its theoretical findings empirically, as well as testing on two concrete NLP problems, where it shows better results than some popular existing NLP sampling techniques such as FUDGE. Weaknesses: My major problem with the paper is its presentation. The paper proposes a novel algorithm for sampling; however, all the algorithms have to be found in the Appendix (and, minorly, it is not clearly indicated that they are in the appendix, for example at line 189). Note that the paper itself should be self-contained, and the appendix is really for interested readers to learn further details. Another minor presentation point is that the empirical part of this paper is relatively short and doesn't contain very detailed analysis. 
While the theoretical explanation is key for this paper, the similarity between eq(3) and eq(1), or eq(7) and eq(10), makes me think that the theoretical part can be shortened while maintaining readability. Finally, sampling is certainly not the only way (and arguably not the SOTA way) to perform conditional generation; RLHF as used in InstructGPT, or related techniques such as DPG (Khalifa et al. 2021), should be mentioned or, in the best case, compared against to better situate the current work on its empirical aspect. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Table 1, all gradient sampling algorithms have a relatively high perplexity, which is also reflected in the examples in the appendix. Do the authors think it is an inherent limitation of this class of algorithms? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I appreciate that the authors discuss the broader impact as well as transparently give insightful limitations. The text quality issue is largely mitigated in modern large language models such as GPT-4 etc. This might inspire the authors to rethink ways to improve on the quality aspect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will make sure to move Algorithm 1 to the main body of the text. The reviewer suggested shortening the theoretical part to make space for experiments. We will make sure to add the additional experiments and contextualize them with the main text in the final manuscript. Since the main contribution is to offer a novel and principled way to apply gradient sampling for text generation, we devoted enough space to clearly explain our method. To this end, we need to systematically build up the knowledge that is not necessarily assumed to be known by all, which in fact is appreciated by the reviewers. We fear that shortening the theory part even further might confuse readers. > sampling is certainly not the only way (and arguably not the SOTA) way to perform conditional generation, RLHF used in InstructGPT or related techniques such as DPG (Khalifa et al. 2021) should be mentioned or in the best case compared to better situate the current work on its empirical aspect. In this work, we focus on vanilla language models. We agree with the reviewer that RLHF and instruction-finetuning change the probability distribution of the underlying LM, and can definitely impact the controllability of language models. Analyzing and adapting such models for controlled generation, however, needs further analysis which we leave as future work. We will discuss this point further in the Limitations section of the final manuscript. > In Table 1, all gradient sampling algorithms have a relatively high perplexity, which is reflected also in the examples in the appendix. Do the authors think it is an inherent limitation of this class of algorithms? Imposing a certain control on the generation process can introduce a trade-off between the success rate (in following the control) and fluency (that might be measured with perplexity). 
Such a trade-off exists not only for gradient-based sampling methods but also for other types of controlled generation methods (see Table 1 in [1]). Therefore, we do not believe that this is a particular limitation of gradient-based sampling methods. [1] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics. --- Rebuttal Comment 1.1: Title: Thank you for the detailed answers Comment: My comments have been addressed in the response. I hope that the presentation can be further improved when the paper is accepted, to make the reading easier and more accessible to other readers. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to revisit the paper. We would greatly appreciate it if you could also reconsider your score in light of the response and any new perspectives gained.
Summary: The authors have proposed a novel framework for gradient-based sampling from neural autoregressive LMs named Structured Voronoi Sampling. The core idea of the proposed approach is to map the LM distribution to an embedding-based version of it and then use newly proposed structured Voronoi cells to perform sampling based on HMC. The authors carefully describe each step along these transformations and why it is theoretically sound. They discuss the connection of the proposed approach to related work such as COLD decoding and MuCoLa. They have run experiments on both a toy task and controlled text generation to show the effectiveness and superiority of their approach. Strengths: # Originality This work proposes a very coherent way of treating discrete neural LMs using continuous densities. As opposed to prior work, their method uses fewer approximations and heuristics. # Significance Research problems around decoding strategies, including sampling-based decoding, are essential given the widespread use of large-scale language models. This work performs an important step towards better understanding of gradient-based sampling methods and how to apply them to LMs parameterizing discrete/categorical distributions. Weaknesses: # Experiments * The authors included samples from their method and the related work they re-implemented, and these samples look pretty bad. Their proposed method repeats the same tokens right after each other, making the samples look very unrealistic w.r.t. the unknown data distribution, e.g. "It is located in *the the* city centre near The Portland Arms.The Eagle is an Italian restaurant". This makes the sample-based evaluations in the main text much less convincing. I tend to believe that the reason behind this is a poorly finetuned model: the authors used a GPT-2 model, which is quite outdated, while much better alternatives exist. Given that the authors used modern GPUs (A100 40GB), they had every opportunity to choose a stronger initial model. 
* The authors used only 1000 samples to analyze empirical distributions, which is quite limited considering the vocabulary size and average sentence length. Using many more samples and a better-trained model could reveal more interesting observations. # Efficiency The proposed approach is very slow even compared to Langevin dynamics. It would help to address this in the main text and not in the appendix. It would also help to include the usual ancestral sampling time, to show the gap between gradient-based sampling and ancestral sampling in general. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: General comment: I believe there is very strong theoretical work here, which has a rather weak experimental setting because of the poor choice of the initial model (GPT-2) for fine-tuning and of the task setting. I think this could become a much stronger submission once stronger experiments are done. Questions: Sec 7.2, lines 293-299: it is unclear what you mean by the reference distribution here and how the LM is related to it. IIUC, you are saying that ancestral samples from GPT-2 finetuned on the E2E dataset resemble the unknown reference/data distribution? I think that is a very rough approximation. I think ancestral samples from GPT-2 give unbiased estimates of the sequence-level distribution induced by GPT-2, and that's it. The underlying connection between finetuned GPT-2 and the reference distribution is unknown, but likely not that strong given the samples shown in the appendix. Table 1: in the text you claim SVS outperforms everything else on the given metrics, but does it? It is very close to Langevin in success and PPL (~same given the std), and its diversity is lower than FUDGE / MuCoLa. Moreover, I think diversity should be compared to values computed over the data distribution; could you report that as well? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors included a broader impact section and discussed how their framework could help alleviate negative implications of large LMs, as well as its potential to be a generator of intentionally toxic content. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful assessment and feedback. **Experiments** > The authors included samples from their method and the related work they re-implemented, and these samples look pretty bad. Their proposed method repeats the same tokens right after each other, making the samples look very unrealistic w.r.t. the unknown data distribution, e.g. "It is located in the the city centre near The Portland Arms.The Eagle is an Italian restaurant". This makes the sample-based evaluations in the main text much less convincing. I tend to believe that the reason behind this is a poorly finetuned model: the authors used a GPT-2 model, which is quite outdated, while much better alternatives exist. Given that the authors used modern GPUs (A100 40GB), they had every opportunity to choose a stronger initial model. First, we should quickly clarify two points: - We must first note that we put a hard limit of only generating sequences of certain lengths; therefore, one must look at these generations as incomplete. The example mentioned in this review can take a complete form as: "It is located in the the city centre near The Portland Arms. The Eagle is an Italian restaurant with a customer rating of 1 out of 5." - Another important point to mention is the unavoidable trade-off between success and perplexity, which also impacts larger models. Besides the two points mentioned, we agree with the reviewer that using a larger model can help to produce more fluent generations. To address this comment we added another experiment with GPT2-large; please refer to the response to all reviewers for further details. > The authors used only 1000 samples to analyze empirical distributions, which is quite limited considering the vocabulary size and average sentence length. Using many more samples and a better-trained model could reveal more interesting observations. 
In general, and in the absence of access to the ground-truth distribution, it is hard to know whether a certain number of samples is enough or not. We follow prior work, e.g., MuCoLa, in choosing the number of generations. **Efficiency** We will consider moving the efficiency analysis to the main body. Comparing inference times to ancestral sampling, however, can be misleading, since with ancestral sampling one cannot enforce any constraints. In other words, a certain extra computational overhead is introduced (not only with gradient-based sampling but with any other method, e.g., FUDGE) whenever one needs to control an aspect of the generations. **Questions** > Sec 7.2, lines 293-299: it is unclear what you mean by the reference distribution here and how the LM is related to it. IIUC, you are saying that ancestral samples from GPT-2 finetuned on the E2E dataset resemble the unknown reference/data distribution? I think that is a very rough approximation. I think ancestral samples from GPT-2 give unbiased estimates of the sequence-level distribution induced by GPT-2, and that's it. The underlying connection between finetuned GPT-2 and the reference distribution is unknown, but likely not that strong given the samples shown in the appendix. The ground-truth distribution in that experiment is set to be the finetuned LM distribution. As the reviewer stated, ancestral samples give unbiased estimates of this distribution. Therefore, comparing the distribution of drawn samples with the distribution of ancestral samples can be helpful in understanding which sampling algorithms are less biased. Also, please note that the samples shown in the appendix are from the controlled generation setup and are not ancestral samples of the fine-tuned model. > Table 1: in the text you claim SVS outperforms everything else on the given metrics, but does it? It is very close to Langevin in success and PPL (~same given the std), and its diversity is lower than FUDGE / MuCoLa. 
Moreover, I think diversity should be compared to values computed over the data distribution; could you report that as well? We do not claim that SVS significantly outperforms everything. The contributions of this paper result in two methods: Langevin (which operates on the Voronoi measure) and SVS, and as mentioned in Section 7.3, “both Langevin and SVS result in a high success rate and maintain fluency and diversity, and SVS is effective in maintaining a balance between various metrics and producing fluent sentences that adhere to control targets.” As we also highlight in that section, FUDGE gives more diverse samples, but this comes at the cost of significantly lower success rates. Thank you for your suggestion on adding the ancestral sample results; we added that as the first row (GPT-2) in Table 8 of the additional uploaded PDF. --- Rebuttal Comment 1.1: Title: thanks! Comment: Thanks for the response! I am satisfied with the provided answers and extra experiments and am going to increase my score!
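The bias check discussed in this exchange — comparing a sampler's empirical distribution against ancestral samples, which are unbiased draws from the finetuned LM — can be sketched roughly as follows. This is an illustrative helper, not code from the paper; the function name, add-one smoothing, and toy vocabulary are assumptions.

```python
from collections import Counter
import math

def empirical_kl(sampler_draws, ancestral_draws, vocab):
    """KL(p_hat || q_hat) between two empirical categorical distributions,
    e.g. a gradient-based sampler's draws (p) vs. ancestral samples (q).
    Add-one smoothing keeps unseen outcomes from producing log(0)."""
    cp, cq = Counter(sampler_draws), Counter(ancestral_draws)
    p = [cp[v] + 1 for v in vocab]
    q = [cq[v] + 1 for v in vocab]
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp) / (qi / zq))
               for pi, qi in zip(p, q))
```

A sampler whose draws match the ancestral distribution gives a KL estimate near zero; a biased sampler (e.g. one over-sampling low-probability elements) gives a strictly positive value.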
Summary: The authors present Structured Voronoi Sampling, a gradient-based sampling approach. Specifically, the authors map the discrete distribution defined by a language model to a continuous density; the density is then used to sample via a process based on Hamiltonian Monte Carlo. The novelty of this paper comes from the theoretical analysis, but the core weakness is the empirical results: not much performance gain is seen across the experiments. Strengths: - The paper is well-written and structured. The flow of the paper is easy to follow, and the authors explain the introduced concepts step by step. - The paper proposes a novel gradient-based sampling method that caters to controlled generation tasks. Weaknesses: - Weak/wrong claims: e.g. lines 65-66. Not all language models share input and output embeddings. The authors should rephrase the sentence to avoid possible misunderstanding. - The empirical results are weak. There is not much difference compared to Langevin in success and PPL score in Table 1. The same applies to Figure 3. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. Is there any particular reason why the authors did not test on a popular controllable text generation task? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the summary and weaknesses sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. > e.g. lines 65-66. Not all language models share input and output embeddings. The authors should rephrase the sentence to avoid possible misunderstanding. We will rephrase this as: *_in **most** language models, the weights are shared between the language model head and the embedding layer_* in the final version of the manuscript. > The empirical results are weak. There is not much difference compared to Langevin in success and PPL score in Table 1. The same applies to Figure 3. We need to clear up a misunderstanding here: both the Langevin and SVS methods in Table 1 operate on the Voronoi measure that is introduced in this paper, and are thus contributions of this paper. Regarding the advantage of SVS over applying Langevin dynamics directly on the Voronoi measure, we must note that it depends on the task and computational budget. Based on the insights gained from the toy experiment, the difference is more pronounced when the underlying distribution is more peaked. > Is there any particular reason why the authors did not test on a popular controllable text generation task? Unfortunately, there is no established and popular benchmark for controlled generation. We chose the same setup as used in [1]. To address your comment, we added a new experiment on a sentiment control task. Please refer to the general response to reviewers for more details. [1] Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. Diffusion-LM improves controllable text generation. In NeurIPS 2022. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: The rebuttal reads well. There was a misunderstanding on my part about Langevin in the paper. Thank you for the rebuttal; I will change my score accordingly.
Summary: The paper proposes a new gradient-based sampling approach called Structured Voronoi Sampling (SVS) for controlled text generation. The key idea is to extend the discrete point distribution over word embeddings given by language models into a continuous density that spreads out probability over their corresponding Voronoi cells. A Hamiltonian Monte Carlo scheme is then devised to efficiently sample from that density, handling discontinuities between cells through a volume-preserving refraction/reflection trick. Empirical results on a toy problem and a more realistic controlled text generation problem show SVS can better match target distributions and control constraints compared to baselines. Strengths: Developing provably sound sampling methods for text generation remains an important open problem for language models, especially for controlled generation. The proposed method is well-motivated from first principles and provides formal guarantees unavailable with prior heuristic approaches, with rigorous mathematical derivations provided in appendices. A number of innovative steps are taken to get the core Voronoi idea to work, including (i) lifting the discrete token distribution to the embedding space, (ii) smoothing the measure-zero discrete distribution to a continuous density with the Voronoi transformation, and (iii) handling discontinuities in sampling with the refraction/reflection trick. These moves are independently interesting in their own right and a large segment of the NeurIPS community will likely find at least one of them to be novel and potentially applicable in future work. Related methods like MuCoLa and Fudge are discussed and compared in the experiments. The toy domain is pedagogically useful for exposing the limitations of existing approaches, while the scaled-up controlled generation results are promising, showing competitive fidelity and control. 
The paper is well-written and structured overall, and the writing systematically builds up the relevant concepts for a typical NeurIPS reader to understand the significance. The key generalizable ideas are appropriately emphasized. Weaknesses: (1) The most significant issue limiting the applicability of SVS as currently posed is the problem of calculating the base measure $\mu$ in a high-dimensional space. This problem in some sense puts us back where we started, as the original problem motivating SVS in the first place is that the integral in the normalizing constant of the energy function is not tractable. Computing the exact integral for the base measure in SVS is obviously also not tractable (as noted up-front as a limitation in the paper, which I appreciated). The assumption in 6.1 that all cells have equal base measure is a very strong assumption, given that this clearly does not hold in practice, and somewhat undermines the very careful and rigorous progression of proofs of correctness. I see a couple of ways to strengthen this part of the paper, and would strongly recommend doing something to close this gap for the paper to be maximally impactful. One possibility is to provide some analysis of the consequences of violating this ‘equal base measures’ assumption: is there a bound on how bad violations can get? How badly is this assumption violated in the controlled generation task being reported? Intuitively, ignoring real differences between them would result in over- or under-representing certain regions. A second possibility would be to take a first step toward some approximate method to account for base measure differences. For example, just taking distances to the first k nearest neighbors of a point could efficiently put a bound on how big the cell could be? Or some kind of Monte Carlo approximation? 
Clearly a full resolution of this problem is a task for another paper, and I do not expect this problem to be fully solved, but charting a course (even if that course is a bit inefficient) would go a long way. (2) More analysis could be provided on the toy example with the known reference distribution; for example, what do the sampled distributions actually look like? In what way is MuCoLa off when the temperature is low? How does this scale with the size of the space (e.g. if instead of a 2x2 square, it were a 2^k hypercube in k dimensions)? Does MuCoLa just take longer to burn in, or does it get stuck sampling {0, 1, 2}? I would have liked to understand these failure modes a bit better to better motivate how the proposed method overcomes them. (3) The controlled generation task could also be strengthened: the results are from a single fairly bespoke 'food' task and classifier, which limits generalizability. It would be more compelling to evaluate on a more systematic suite of controlled language generation benchmarks to better pinpoint where the benefits of Voronoi sampling are most pronounced. Additionally, since controlled generation is highly dependent on the (trained) classifier being used to guide sampling, it would be helpful to test lower- or higher-capacity classifiers with different uncertainty levels. I worry that using the same classifier for both generation and evaluation (the ‘success’ column of Table 1) means we may just be measuring the sampler’s ability to overfit to a bad classifier (it is quite a bit more ‘faithful’ to this classifier than MuCoLa or Fudge, which is not necessarily a good thing if it is just ‘hacking’ a bad classifier). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. More exposition could be provided to unpack eq. (2), which may be confusing to a typical NeurIPS reader. In particular, I think many practitioners are more familiar with writing the final layer as a softmax applied to a linear transformation, i.e. 
something like $P(w_n \mid w_{<n}) = \mathrm{softmax}(v \cdot f(w_{<n}))$, where $v$ is the full embedding matrix. Some readers may also have forgotten that GPT-2 uses the same embedding matrix at the initial and final layers (other implementations use a separate fully-connected layer here), so it may be worth a gentle reminder to avoid confusion. 2. Similarly, some additional clarification could be given to interpret the denominator in Def. 2 and 3 as essentially normalizing by the ‘volume’ of the cell (with the measure giving a suitable notion of volume). This will be obvious to those versed in measure theory but could help other ML researchers with less mathematical background. 3. I was trying to think through the precise relationship to other approaches based on nearest-neighbor smoothing (e.g. Khandelwal et al., 2019, “*Generalization through memorization: Nearest neighbor language models*”; Khandelwal et al., 2020, “*Nearest neighbor machine translation*”; El-Kishky et al., 2023, “*kNN-Embed: Locally Smoothed Embedding Mixtures for Multi-interest Candidate Retrieval*”). These approaches have a very similar flavor to the Voronoi embedding, which on the surface appears to be equivalent to some kind of importance sampling based on nearest neighbors. Does the Voronoi sampler reduce to something like this as a special case? 4. Very minor: it was a bit confusing to have Alg. 1 prominently referred to in the main text (down to giving line numbers) but not be able to find it until I was later going through the supplemental. If it is important, it may be worth trying to fit it into the main text. 5. Very minor: the abstract uses the acronym SVS but it doesn’t reappear until Fig. 3, at which point I was very confused (pragmatically, it’s confusing that it’s referred to as ‘Voronoi sampling’ in Fig. 2, right next to it, which implies that SVS is something different?) 6. 
Very minor: is it worth pointing out in Section 7.2 that there is no reason to actually prefer Voronoi sampling over regular ancestral sampling, given that ancestral sampling is already very efficient and provably matches the target distribution? And that we’re just doing this exercise (Fig. 3) as validation that the Voronoi sampler is quite close to the ‘gold standard’? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful analysis of our work and your valuable suggestions. (1) Regarding the assumption on base measures and approximations: thank you for your suggestions on how to strengthen the paper. Perhaps the most practical way to approximate base measures is through importance sampling. An integration of importance sampling with SVS could look like this: at each reflection/refraction step (line 3 in Alg. 5), we need to compute the difference in the potential energy of two points: - If these points belong to the same Voronoi cell, then the base measures are equal. - If not, suppose they belong to cells $C_m$ and $C_{m'}$. We take a number of samples from a Gaussian distribution with the mean set to the center of the corresponding Voronoi cell (let's call the centers $m$ and $m'$). We then approximate $\int_{C_m} \exp(-\frac{1}{2} \| g_m^t - x \|^2)\,\mathrm{d}x = \int_{C_m} f(x)\,\mathrm{d}x$ with $\frac{1}{N} \sum_{n=1}^N \frac{f(x_n)}{q(x_n)}$, where $x_n \sim q = \mathcal{N}(m, \varepsilon I)$. Ideally, one should use a truncated Gaussian distribution as $q$; however, we might use the full Gaussian distribution as an approximation. As the reviewer stated, analyzing and improving this approximation can be the subject of a separate project that we defer to future work. (2) Regarding extending the analysis on the toy experiment: thank you for this suggestion. We added more experiments on the toy model. Here is a summary of what we found: - In Figure 5, we vary the number of iterations. We observe that the difference between sampling methods is more pronounced with fewer iterations. - As suggested, we then look into the distribution of sampled elements at temperature 0.25 after 100 iterations. We observe that MuCoLa samples the highest-probability element less frequently than Voronoi sampling or the reference distribution, while sampling {0, 1, 2} more often than needed. - Finally, we extend the toy model to hypercubes in k dimensions (Figure 7). 
Generally speaking, as the dimensionality increases, the divergence between the samples’ distribution and the true distribution also increases for all sampling methods. Furthermore, Voronoi sampling consistently converges faster across different values of k. (3) Regarding the choice of the controlled generation task and classifiers: we added another experiment on a common task; please refer to the general answer to all reviewers for further details. Indeed, the classifier’s accuracy and certainty play a significant role in the generation quality. To ensure fairness as much as possible, we use the exact same classifier for the gradient-based algorithms, and a similar architecture for FUDGE. We also need to clear up a misunderstanding here: the success measures reported in Table 1 are **not** evaluated by the same classifier used for controlling the generations. We train a separate and arguably more accurate “evaluator” classifier. Please refer to Table 5 for a comparison of the classifiers' accuracies. **Questions** Thank you for your suggestions; we will add the suggested clarifications and move Algorithm 1 to the main body of the text in the final version of the manuscript. > I was trying to think through the precise relationship to other approaches based on nearest-neighbor smoothing (e.g. Khandelwal et al., 2019, “Generalization through memorization: Nearest neighbor language models”; Khandelwal et al., 2020, “Nearest neighbor machine translation”; El-Kishky et al., 2023, “kNN-Embed: Locally Smoothed Embedding Mixtures for Multi-interest Candidate Retrieval”). These approaches have a very similar flavor to the Voronoi embedding, which on the surface appears to be equivalent to some kind of importance sampling based on nearest neighbors. Does the Voronoi sampler reduce to something like this as a special case? 
We agree with the reviewer that on the surface these approaches might seem similar; however, there are major differences between Voronoi sampling and kNN-LMs that make us believe that neither is a special case of the other. First, in kNN-LMs, a training dataset (or a set of examples) is cached, which is then used during generation. In this work, however, the centers of the Voronoi cells are simply the words in the vocabulary. Second, sampling a new word in a kNN-LM is still autoregressive, and the underlying probability distribution is an interpolation between the LM probability and the distance to the cached exemplars. In Voronoi sampling, by contrast, we sample the whole sequence at once, and the probability of sampling a sequence is an unaltered LM probability. > Very minor: the abstract uses the acronym SVS but it doesn’t reappear until Fig. 3, at which point I was very confused (pragmatically, it’s confusing that it’s referred to as ‘Voronoi sampling’ in Fig. 2, right next to it, which implies that SVS is something different?) We intentionally use different terms in these two figures. In the toy experiment, only one embedding is sampled, which is different from the text generation experiments, where a sequence of embeddings is sampled. Therefore, we call the former Voronoi sampling and the latter structured Voronoi sampling to highlight this difference. We will clarify this further in the final version of the manuscript. --- Rebuttal Comment 1.1: Title: Thanks! Comment: I appreciate the thoughtful response and the improvements described, which strengthen an already very strong paper.
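The importance-sampling idea for base measures discussed in this thread can be sketched as a minimal Monte Carlo estimate of one cell's integral $\int_{C_m} \exp(-\frac{1}{2}\|g - x\|^2)\,\mathrm{d}x$, using an untruncated Gaussian proposal centered at the cell's center; samples falling outside the cell are zero-weighted, which plays the role of the truncation mentioned in the rebuttal. The function name, NumPy, and the toy geometry are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voronoi_cell_mass(centers, m_idx, g, n_samples=10_000, eps=1.0, seed=0):
    """Importance-sampling estimate of int_{C_m} exp(-0.5 * ||g - x||^2) dx,
    where C_m is the Voronoi cell of centers[m_idx].  The proposal is an
    untruncated Gaussian q = N(centers[m_idx], eps * I); samples landing
    outside C_m get weight zero, standing in for a truncated proposal."""
    rng = np.random.default_rng(seed)
    d = centers.shape[1]
    mean = centers[m_idx]
    x = rng.normal(mean, np.sqrt(eps), size=(n_samples, d))
    # Indicator of the Voronoi cell: the nearest center must be centers[m_idx].
    dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    in_cell = dists.argmin(axis=1) == m_idx
    f = np.exp(-0.5 * np.sum((g - x) ** 2, axis=-1)) * in_cell
    # Density of the Gaussian proposal q(x).
    q = np.exp(-0.5 * np.sum((x - mean) ** 2, axis=-1) / eps) \
        / (2 * np.pi * eps) ** (d / 2)
    return float(np.mean(f / q))
```

As a sanity check, when one cell contains essentially all the Gaussian mass around $g = m$, the estimate approaches the unrestricted integral $(2\pi)^{d/2}$.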
Rebuttal 1: Rebuttal: We thank the reviewers for providing valuable and constructive feedback. We first provide responses to a shared concern raised by multiple reviewers. Responses to individual reviewers are provided below. We report the results of new experiments in an additional PDF. Multiple reviewers were concerned that we could have picked a more popular task for the controlled generation experiment. To address this concern, we added a new experiment on a more popular sentiment control task, with which many of the prior works also experimented [1, 2, 3]. The goal of the task is to control the sentiment of the generations. We use the same 15 prompts used in [1, 2] and generate 10 samples per prompt using **GPT2-Large**. Similar to prior work, we train classifiers on the SST-2 dataset for sentiment classification. We use this classifier to enforce a positive sentiment in the generations. Results are shown in Table 8 of the additional PDF. We observe the following: - FUDGE mostly fails to follow the control, while providing more fluent and diverse outputs. - MuCoLa achieves a higher success rate compared to FUDGE, but significantly lower success rates compared to Langevin or SVS, with a high variance in perplexity and success rate. - Both SVS and Langevin dynamics on the Voronoi measure perform quite well in terms of following the control, and SVS achieves the best overall success rate. These results are more or less in line with the topic control experiment, i.e., Table 1. [1] Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations (ICLR), 2019. [2] Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov. Constrained sampling from language models via Langevin dynamics in embedding spaces. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). [3] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. 
Smith, and Yejin Choi. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics. Pdf: /pdf/59c91df3528806d54e85b2c2bb3dfdcdd0ae112e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes Structured Voronoi Sampling: a new gradient-based algorithm to sample from a distribution (i.e. a language model). The proposed approach leads to comparably fluent text whilst being better able to follow constraints (e.g. a topic) for the desired generation. Strengths: 1. a new sampling method that is a correct MCMC scheme (unlike previous gradient-based samplers) 2. evaluation on a synthetic task shows that the proposed approach better models the tail of the true distribution 3. evaluation on constrained language modeling shows that the proposed approach achieves good fluency and diversity, whilst being able to follow the control target Weaknesses: 1. while the writing is generally great, it would help to have an introduction to gradient-based sampling before Section 3 2. it is not clear how the proposed approximation of the (costly!) base measure affects the correctness of the proposed method Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. some of the key algorithms and results are reported in the appendix. It would be good to include them in the main text upon acceptance 2. in eq (2), enc(context) is a single vector. This is not the case with current models (e.g. GPT-2), which have a vector per input token. Can you comment on how this affects the formulation of the LM with embeddings? 3. in eq (3), the conditioning on V_{<n} seems wrong, as `n` is only defined in the Cartesian product Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Regarding the suggestion on the presentation of gradient-based sampling, we will try to motivate gradient-based sampling further before going to the details in the final manuscript. > it is not clear how the proposed approximation of the (costly!) base measure affects the correctness of the proposed method We do not apply any approximation in our experiments. As stated in the paper, our approach offers an exact sampling algorithm, up to the computation of the difference in base measures. This paves the way for future works on potential strategies to approximate the difference in base measures. Our empirical findings indicate that SVS exhibits strong performance compared to MuCoLa. **Questions** 1. We will move Algorithm 1 to the main body in the final manuscript. 2. $\mathrm{enc}(w_{<n})$ is a single vector and can be the output of current LMs (like GPT-2). Concretely, to compute this vector with GPT-2, we pass the context $w_{<n}$ to GPT-2, and look at the last layer representation of GPT-2 **at position $n$**. We will make sure to clarify this further in the final version of the manuscript. 3. Thank you for catching the typo; the conditioning should be on V. We will fix this in the final manuscript. --- Rebuttal Comment 1.1: Title: Feedback on rebuttal Comment: Thank you for clarifying my concerns and engaging with the comments raised by the other reviewers!
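The $\mathrm{enc}(w_{<n})$ recipe described in the rebuttal (run the causal LM on the context and read off the last-layer state at the final context position, yielding a single vector) can be sketched with a toy stand-in. The recurrence, dimensions, and weights below are all hypothetical illustrations, not the authors' model or GPT-2:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 8, 5                            # hypothetical hidden size and vocabulary
E = rng.normal(size=(vocab, d))            # token embedding table (toy)
W = rng.normal(size=(d, d)) / np.sqrt(d)   # one recurrent "layer" (toy)

def hidden_states(token_ids):
    """One hidden vector per position; position i only sees tokens <= i (causality)."""
    hs, acc = [], np.zeros(d)
    for t in token_ids:
        acc = np.tanh(W @ (acc + E[t]))    # causal recurrence stands in for GPT-2
        hs.append(acc)
    return np.stack(hs)                    # shape: (len(token_ids), d)

def enc(context_ids):
    """enc(w_<n): the last-layer representation at the final context position."""
    return hidden_states(context_ids)[-1]  # a single d-dimensional vector

context = [3, 1, 4]
assert hidden_states(context).shape == (3, d)  # one vector per input token...
assert enc(context).shape == (d,)              # ...but enc() is a single vector
```

The point of the sketch is the reviewer's Question 2: the model does produce one vector per input token, but the quantity used in the paper's Eq. (2) is only the vector at the last context position.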
Summary: Gradient-based sampling for text generation is an important challenge, as it allows for sampling from energy-based models, such as one defined by a mixture of experts as found in classifier-guided sampling. The main challenges in gradient-based sampling for discrete distributions are encoding the discrete distribution into $R^d$ and dealing with inevitable discontinuities that arise in the encoding. The paper proposes a method that addresses these two challenges with Voronoi measures and an application of refract+reflect HMC respectively. The method is validated in three settings: First, sampling from a non-structured and tractable discrete distribution with 4 classes and associated embeddings. Second, sampling from a language model. Third, sampling from a language model with additional constraints. In all settings, the method improves upon baselines, namely Fudge, MuCoLa, and Langevin dynamics (without reflection). Strengths: * The approach is well-motivated and interesting. * The writing is clear and easy to follow. * In text generation, unconstrained and constrained, the method does show improvements over the MuCoLa baseline. However, the gains from reflection and refraction seem quite small. Weaknesses: Experiment baselines: The paper only compares to MuCoLa and Fudge as text generation baselines, but other methods for controllable text-generation exist such as diffusion-LMs and other Gibbs with gradient (GwG) methods [2]. Since the claim is principled gradient-based methods for text-generation, not having a comparison to diffusion-LMs seems reasonable. However, I believe other GwG methods should be compared (see questions). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is the method related to Gibbs with gradients methods, such as [1] and [2]? SVS takes advantage of embedding geometry and I believe GwG does not, which would plausibly lead to improvements. 2. 
It would have been nice to see comparisons to non-principled gradient-based sampling, such as COLD. A lack of rigorous justification for an existing method should not be enough to discount its (potential) effectiveness. 3. Why is the constrained text generation task different from the tasks studied in MuCoLa? 4. There are quite a few references to lines of algorithms in the Appendix. 5. Are the small gains from reflection due to the length of the chains? Would the difference between Langevin and SVS be larger if the computational budget was smaller? [1] Grathwohl, W., Swersky, K., Hashemi, M., Duvenaud, D.K., & Maddison, C.J. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. ICML 2021. [2] Zhang, R., Liu, X., & Liu, Q. (2022). A Langevin-like Sampler for Discrete Distributions. ICML 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for bringing up the Gibbs with gradient methods. We will ensure their discussion in the related works section of the final manuscript. However, in terms of empirical evidence, [1] doesn't feature any experiments on language generation. We believe that applying this approach to sample a text sequence would be impractical without further improvements. The primary reason is that both [1] and [2] operate within the logits domain. In the context of language generation, this entails sampling each element from a vocabulary of approximately 50,000 items (as in the case of GPT-2 small). This vocabulary size is significantly larger compared to MuCoLa or SVS, which function in the embedding space $\mathbb{R}^{768}$. While [2] does present an experiment on the infilling task, it only samples 25% of tokens within sentences. This setup is computationally less demanding than the generation settings outlined in our paper. To the best of our knowledge, the code for the infilling experiment hasn't been released. Consequently, additional research and implementation are necessary to scale these methods for sampling sequences of length 20 or more. **Questions** 1. The benefit of Gibbs with gradient methods is that they provide bounds for convergence. However, as stated by the reviewer, they do not benefit from the similarity of the word embeddings when navigating the state space. As mentioned above, such approaches are considerably more expensive, which limits their applicability in real-world text generation settings. 2. We’ve attempted to adapt COLD for the controlled generation experiment. Unfortunately and even after increasing the control term’s weight, the outcomes closely resembled those of the uncontrolled GPT-2, resulting in very low success rates. We plan to persist in our efforts to improve COLD results by experimenting with varying hyperparameters. If we observe any improvement, we will include the updated results in the final manuscript. 3. 
Unfortunately, there is no widely used benchmark for controlled generation. Even when two papers use the same benchmark, the exact prompts that are used could be different, which limits the reproducibility and fairness of the results. In this paper, we followed the experimental setup in [3]. To address your comment, we added a new experiment on a task that is studied in MuCoLa and other prior works; please refer to the general response to the reviewers for further details. 4. We will move Algorithm 1 to the main body of the text in the final manuscript. 5. Yes, that could very well be the case. However, it is hard to empirically test this hypothesis on the language generation experiments, since the ground truth distribution is unknown. To answer this, we added more experiments on the toy model (please see Figure 5) that support this hypothesis. We observe that SVS has the lowest JS divergence compared to HMC on the Voronoi measure, and the difference is more pronounced when doing fewer iterations. [3] Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. Diffusion-LM improves controllable text generation. In NeurIPS 2022. --- Rebuttal Comment 1.1: Comment: My initial score did not reflect the strength of the paper, and will be increased to accept. Regarding comparisons to Gibbs with gradient (GwG) methods: More experiments would be very nice to have, but not necessary. The proposed method is more general than sampling from only an embedding-parameterized language model, and therefore would ideally also be compared to GwG methods in at least a toy setting. The likeliness of the proposed method outperforming GwG (due to the additional assumption of access to embeddings) is an opportunity to broaden the impact and strengthen the paper, rather than a weakness. 
One possible experiment would be to show error and runtime / steps for an embedding-parameterized model from [1] or [2], such as a Potts model, at various numbers of classes / embedding dimensions. [1] Grathwohl, W., Swersky, K., Hashemi, M., Duvenaud, D.K., & Maddison, C.J. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. ICML 2021. [2] Zhang, R., Liu, X., & Liu, Q. (2022). A Langevin-like Sampler for Discrete Distributions. ICML 2022. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response, and your great suggestion on comparing SVS and GwG on a toy model. We will work on this and consider adding it to the final version of the manuscript.
null
null
null
null
Complete Neural Networks for Complete Euclidean Graphs
Reject
Summary: The authors provided theoretical analyses and proof that the 3-WL algorithm and the Euclidean version of the 2-WL algorithm can distinguish any complete Euclidean graph pairs. The authors then demonstrated that the algorithm can be approximated with GNNs and ran the proposed model on synthetic data to show that it was indeed able to "separate" the hard graph pairs. Strengths: 1. The authors provided rigorous mathematical formulation and proofs. The problem was well-defined and formulated in the paper. 2. The organization of the paper is clear. The authors first discussed the problem, followed by theoretical analyses and proof. Model architectures and experiments are then provided to support the theoretical claims. 3. The theoretical results are general and can be potentially applied to a wide range of models. Weaknesses: 1. The experiments are inadequate. The authors only tested their model and the baselines on small-scale synthetic datasets. The dataset used in the paper only contains small molecular graph pairs whereas, in practical 3D point cloud scenarios, there can easily be thousands of points. Furthermore, at such a scale, many traditional GNN-based networks including MACE and TFN are also "separating" according to the results. The authors may test their model and the baselines on larger practical datasets. 2. The potential benefit of a model being "separating" was also not examined experimentally. Eventually, we want models to output some meaningful values in the classification or regression task. The authors may experiment with such tasks to demonstrate that the capability of graph isomorphism tests indeed helps with representation learning. 3. The potential application of the algorithm is greatly limited by the assumption that the graph is complete. The authors also mentioned in the future work section that the proposed model scales as $O(n^4)$ with respect to the number of nodes, which is prohibitively large even for small point clouds. 4. 
The writing of the manuscript needs to be improved in some places. None of the references are in parentheses, making the manuscript hard to read (e.g. line 143 when quoting the WL test). The quotation marks are not paired. The sections are also a bit strange. I would suggest making the related work and future work a separate section and the current Sec 3 into a subsection (as it is also a theoretical analysis). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Regarding the experiment part, how will the proposed model behave when scaling up to larger practical datasets? (See Weakness 1) 2. Regarding the experiment part, will the capability of graph isomorphism tests help with representation learning and other downstream tasks? (See Weakness 2) 3. The proposed model scales as $O(n^4)$ with respect to the number of nodes. How can it scale up to practical point clouds with thousands of points? Can we drop or relax the *complete graph* assumption? This problem is crucial as the locality assumption is crucial for GNNs. 4. Will the theoretical results hold if locality is assumed? That is, each vertex only aggregates information from a subset of the vertices (its nearby neighbors). I doubt they would also hold under the locality assumption. Consider the simple case of two regular triangles and one regular hexagon: no number of rounds of 1-EWL will be able to distinguish between them. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have mentioned the limitation of this work in the manuscript and no further negative societal impact is expected. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
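The reviewer's Question 4 (two regular triangles vs. a regular hexagon under locality) can be checked with a small simulation. The 1-EWL colour refinement below is a toy sketch, not the paper's implementation; the `radius` parameter is a hypothetical knob that restricts aggregation to local neighbourhoods, with `radius=None` giving the complete Euclidean graph:

```python
import numpy as np

def ewl_signature(points, radius=None, iters=2):
    """Toy 1-EWL colour refinement on a Euclidean point cloud.

    Each point's colour is refined by the multiset of (rounded distance,
    neighbour colour) pairs. radius=None aggregates over the complete graph;
    a finite radius restricts aggregation to nearby neighbours (locality).
    """
    n = len(points)
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    colors = [()] * n
    for _ in range(iters):
        colors = [
            tuple(sorted(
                (round(float(D[i, j]), 6), colors[j])
                for j in range(n)
                if j != i and (radius is None or D[i, j] <= radius + 1e-9)
            ))
            for i in range(n)
        ]
    return sorted(colors)  # cloud-level signature: multiset of point colours

# Two unit equilateral triangles far apart vs. one regular unit hexagon.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
two_triangles = np.vstack([tri, tri + np.array([10.0, 0.0])])
hexagon = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]
                    for k in range(6)])

# Local aggregation (only neighbours at distance <= 1) cannot tell them apart:
assert ewl_signature(two_triangles, radius=1.001) == ewl_signature(hexagon, radius=1.001)
# ...while refinement over the complete graph separates them:
assert ewl_signature(two_triangles) != ewl_signature(hexagon)
```

Under locality, every vertex in both clouds sees exactly two neighbours at distance 1, so all refinement rounds collapse to the same signature; on the complete graph the long inter-triangle distances immediately break the tie.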
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below are our responses to the questions and concerns. **Concern Regarding Experiments**: As stated in the Author Rebuttal and our responses to the other reviewers, this work is a theoretical work, which does not aim to devise a practical implementation that can compete with state-of-the-art methods. Rather, we proved several novel results regarding the separation power of WL tests. Our synthetic experiments were only meant to empirically validate our theoretical results, and we believe that they are the most suitable tool for this end. **Benefits of Separation:** It is already established in the literature that separation is a desired property for learning algorithms that respect data symmetries. As mentioned in our response to Reviewer eAUn, it was shown in works such as [Pozdnyakov et al.], [Chen et al.] that methods that lack separation power often have inferior practical performance. While MACE and TFN succeeded in distinguishing the point-cloud pairs in our synthetic experiment, these methods are not guaranteed to separate all pairs of 3D point clouds. We do note, however, that the instances of MACE and TFN evaluated in our experiments consider 3-tuples of points, and thus are likely to have more separation power than MPNNs, which only consider pairs of points. This is further indicated by the superior performance of MACE and TFN over MPNN in our experiments, as well as in real-world tasks [Batatia et al. 2022]. This further correlates with the observation that algorithms with higher separation power tend to have better performance in practical tasks. **Regarding the $\mathcal{O}(n^4)$ running-time complexity of our algorithm:** Our method is the most computationally efficient method to date that provably separates 3D point clouds. Any algorithm on point clouds that is continuous and provably separating is likely to come at a cost of a high running time. 
While we are not aware of any established lower bounds, the best previous results to date require using prohibitively high-order tensors ($\mathcal{O}(n^{poly(n)})$) to achieve separation [Dym, ICLR 2021], [Lim, ICLR 2023]. We wish to point out the following computational bottleneck: Achieving a running time of less than $\mathcal{O}(n^4)$ with an algorithm based on 2-WL, would require developing a continuous and separating embedding of multisets in $\mathbb{R}^3$ that has a running time of less than $\mathcal{O}(n^2)$, which is nontrivial. We note that once one is willing to forego guaranteed separation, there exist many heuristics to implement WL-based architectures with a significantly reduced running time. Such heuristics include considering local neighbourhoods rather than the whole graph [Feng et al.]. We intend to add a clarification of this issue to the manuscript. **Regarding the small number of points in the clouds in our evaluation:** Any architecture that is as powerful as high-order k-WL is likely to incur a high computational cost. For example, the well-known PPGN [Maron et al.] requires a running time of $\mathcal{O}(n^5)$. For practical applications, they used a relaxation with a running time of $\mathcal{O}(n^3)$, which is still inapplicable in settings with thousands of points. Yet, their architecture is considered a cornerstone in the study of learning algorithms on graphs and point clouds. **Question regarding local neighbourhoods**: Our theoretical guarantees are only applicable to point clouds and to complete Euclidean graphs. The reviewer's understanding is indeed correct that if locality is allowed, then the separation results will not hold. Relaxations such as allowing locality may nevertheless be used in practice to significantly reduce the running time. We believe that this modification may incur a lesser degradation of performance than using architectures that are based on $1$-WL tests, which are non-separating to begin with. 
This was demonstrated in [Feng et al.], whose relaxation of a $2$-WL variant to local neighborhoods performed significantly better than MPNNs. Lastly, we note that $k$-WL tests with $k>1$ are not defined with a notion of locality, and yet they are the predominant method used to upper-bound the separation power of GNNs [Geerts et al.], [Morris et al.], [Maron et al.], some of which do perform aggregation on local neighbourhoods. Thus, the question of separation of graphs assuming full connectivity is still relevant to these models. **Comments regarding writing:** We highly value the reviewer’s suggestions regarding improving the readability of our manuscript, which we will implement accordingly. **References** [Geerts et al.] Geerts, F., & Reutter, J. L. (2022). Expressiveness and approximation properties of graph neural networks. arXiv preprint arXiv:2204.04661. [Chen et al.] Chen, Z., Villar, S., Chen, L., & Bruna, J. (2019). On the equivalence between graph isomorphism testing and function approximation with gnns. _Advances in neural information processing systems_, 32. [Morris et al.] Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pp. 4602–4609, 2019b. [Maron et al.] Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. _Advances in neural information processing systems_, 32, 2019. [Feng et al.] Feng, Jiarui, et al. Towards Arbitrarily Expressive GNNs in $O(n^2)$ Space by Rethinking Folklore Weisfeiler-Lehman. arXiv preprint arXiv:2306.03266 (2023). --- Rebuttal Comment 1.1: Title: Comment on Authors' Rebuttal Comment: I appreciate your comprehensive rebuttal regarding my previous questions and concerns. 
Though some questions are properly addressed, I'd like to reiterate the concerns that I believe did not get well-addressed in the rebuttal. 1. **The experiments**. I acknowledge the theoretical nature of this work, but still, I don't think the experiments have demonstrated the claimed advantage over other baselines. I do not doubt the completeness of the proposed model, as you have provided solid mathematical proof. Nonetheless, it seems that neither your established theorems nor the experiment results have ruled out the possibility that normal GNNs with locality assumptions also have the expressiveness of distinguishing the geometries. You may refer to the GWL paper (https://openreview.net/forum?id=Rkxj1GXn9_) as an example, in which the authors also tried to deal with a theoretical formulation of distinguishability of geometric GNNs and provided experiment results on a wide range of synthetic data. The experiment results from GWL provided evidence for the theory, and the authors also made plausible interpretations of the failed cases, both of which are lacking in this paper. 2. **The motivation/benefit of separation**. I will make a clearer statement of why the motivation for separation is somewhat dubious in my view. I'd like to first point out that the separation problem you and Pozdnyakov et al. referred to arises only for non-continuous target functions like categorical information, as previous work (e.g., https://openreview.net/pdf?id=6NFBvWlRXaG) has already demonstrated the universality of TFN for approximating any continuous equivariant functions. For categorical labels, I personally do not consider exact Euclidean graph isomorphism necessary. For example, a slightly morphed bunny point cloud should still be recognized as a bunny, and one molecule may adopt various plausible conformations (3D geometries). 
In these scenarios, we instead want the model to produce somewhat *invariant* predictions to demonstrate robustness to perturbations or noise in the data. I'd appreciate it if you can come up with some practical applications in which the separation of non-continuous target functions is desired. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt feedback on our rebuttal. We apologize that we have not yet addressed all of your concerns. Below is our response, which we believe should answer both of your remaining concerns. ### Concern No. 1 ### **Regarding the separation power of normal GNNs:** For most rotation-permutation invariant architectures, the separation power is still unknown. All that is known is that: (a) 1-WL-like architectures do not separate [Pozdnyakov et al.]; (b) TFN is universal/separating [Dym-Maron, ICLR 2021]; and now, using our paper, we know that (c) 3-WL- and 2-WL-like architectures are separating. Note that resolving the separation question for each of these three architectures required a non-trivial theoretical paper. Thus, while we agree that it would be interesting to study the separation power of other architectures, this is outside the scope of this paper. Specifically, regarding GNNs that allow for incomplete graphs: such architectures will clearly not be complete if, for example, the graph is disconnected. **Regarding empirical evaluation:** While we appreciate the extensive empirical evaluation in the GWL paper, note that many of the other papers quoted in our rebuttal (e.g. [Dym-Maron, ICLR 2021], [Villar et al. NeurIPS 2021]) have a much more limited empirical evaluation. For example, [Aamand et al. NeurIPS 22] have only shown simulations of their proposed WL method on sampled graphs. Nonetheless, to address your concern, we have further conducted a separation experiment on real-world water tetramer pairs, proposed in [Pozdnyakov et al.]; see results in the following Official Comment. 
The results demonstrate that our SEWLnet, as well as MACE, achieve separation, while the 1-EWL simulation, TFN and GVPGNN do not achieve separation. Note that while MACE was able to distinguish the molecule pairs in this experiment, it was not proven to be separating. ### Concern No. 2 ### **Regarding Separation and Universality:** Firstly, separation and universal approximation in the continuous domain are tightly interrelated: A model is separating if and only if it can be made universal by composing it from the left with an MLP. For more on this see, e.g. Theorems 17-23 in the GWL paper, Theorem A.1 in our appendix, and the other references discussed in our introduction (the paragraph beginning at line 19). See also Section 2.2 and the figure therein, which illustrate why the failure of separation leads to non-universal models. We will be happy to make the motivation for separation clearer in the main text. **If we already know that TFN is universal, why bother discussing other algorithms?** Note that the proof of TFN universality requires arbitrarily high-dimensional representations of SO(3), and thus requires the prohibitively high $\mathcal{O}(n^{poly(n)})$-time complexity to achieve universality. In contrast, our result requires only $\mathcal{O}(n^4)$ time. This is accomplished by showing, for the first time, that separation can be achieved using only simple low-dimensional invariants of SO(3): inner products and vector products. The ability to achieve separation/universality using low-dimensional invariants was previously recognized as an important open question in several papers, including the paper that proved the universality of TFN [Dym-Maron, ICLR 2021] (see quotes in our Response to All Reviewers). In addition, the study of the separation of geometric k-WL algorithms is a natural question in its own right, motivated by the centrality of k-WL algorithms and their separation power in the GNN literature [Geerts et al.]. 
**On the importance of separation in discrete classification:** Since most discrete classification methods rely on continuous feature vectors, the requirement that these vectors be computed using an invariant, separating, and continuous architecture is very natural: - **Invariance:** Guarantees that permutations and rotations do not influence the classification outcome. - **Separation:** Guarantees that the representation vectors calculated by the model can maintain all the information in the raw input required for classification. Non-separating models may fail in classification due to loss of information. - **Continuity:** Enables the classifier to be robust to local perturbations. We stress that we do not view discrete classification and continuous regression tasks as essentially different, since in practice classification models typically map data to a distribution over the labels using a continuous function that approximates a non-continuous one (e.g. the softmax function vs. 1-hot vector). Such continuous invariant mappings can provably be approximated by separating architectures.
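The claim that non-separating models may fail in classification due to loss of information can be made concrete with a classic homometric pair from the turnpike/partial-digest literature: two non-congruent 1D point sets with identical multisets of pairwise distances. The distance-multiset feature below is a stand-in for an arbitrary non-separating invariant, not any architecture from the paper:

```python
import numpy as np

# Classic homometric pair: different point sets on the line whose multisets of
# pairwise distances coincide. Any feature map that only records the distance
# multiset collides on them, so every classifier composed on top of such a
# (non-separating) representation must output identical predictions for both.
A = np.array([0.0, 1.0, 4.0, 10.0, 12.0, 17.0])
B = np.array([0.0, 1.0, 8.0, 11.0, 13.0, 17.0])

def distance_multiset(x):
    """Sorted multiset of all pairwise distances of a 1D point set."""
    return sorted(abs(x[i] - x[j]) for i in range(len(x)) for j in range(i))

assert distance_multiset(A) == distance_multiset(B)  # the invariant collides
assert sorted(B) != sorted(A)                        # ...yet B is not A,
assert sorted(B) != sorted(17.0 - A)                 # nor a reflection of A
```

This illustrates the interplay of the three properties above: the feature map here is invariant and continuous but not separating, so information needed to distinguish the two inputs is irrecoverably lost before any classifier sees them.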
Summary: The paper analyzes neural networks for point clouds toward modeling of geometric phenomena. It considers the application of message passing networks/GNNs to Euclidean graphs, whereby a variation of the well-studied k-WL test is adapted to point clouds by using a complete graph on the point cloud and making use of Euclidean pairwise distances. To this effect, the authors propose the k-EWL test and show that: (1) For k=1, two iterations of message passing are sufficient to separate most point clouds in any dimension, (2) A single iteration is sufficient for all 3D point clouds when k=3. Furthermore, additional differentiable architectures are proposed and demonstrated to have similar separation power as k-EWL tests. I think the paper has some promise and is generally well-written. But it can be strengthened by better motivating the problem and providing more detailed experimentation, ideally on chemistry/molecular datasets (given that this was cited as a motivation/application in the intro). I think the paper needs some more work, but addressing some of these points would make me open to raising my score. Strengths: (1.) The paper is generally easy to follow and well-written. I understood the definitions and theorem statements without any problem. (2.) GNNs + point clouds seems like an underexplored area, and the authors make progress in this area (motivation notwithstanding). Weaknesses: (1.) I think there is some motivation lacking for why one wishes to separate point clouds via GNNs, or why it is desirable to construct variants of k-WL. The paper hints at the importance of this for chemical applications, but there isn't much discussion about this beyond the intro. (2.) The formulation of EWL doesn't seem novel; it is simply a standard MPNN on a complete graph with use of distance as an edge feature in the update rule (in the framework of Gilmer et al. 2017). SEWL seems more interesting, but the motivation is a bit lacking. (3.) 
The experiments could be stronger. While the authors provide experiments on synthetic datasets of point clouds demonstrating effectiveness of the proposed architectures, the paper could benefit from some experiments on real-world chemistry tasks, as chemical applications, biological molecular datasets, etc. were cited as a motivation in the intro. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Suggestions: (1.) Make the motivations in the intro stronger (see weakness above). (2.) Including further experiments, e.g., on real world chemistry datasets (in line with the motivation in the intro) would highly strengthen the paper. Questions: (1.) Could the authors comment on non-neural net based approaches for the task? How does the proposed approach compare to these? The methods in Table 1 seem to be all NN-based. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Some limitations are highlighted in the Future Work section at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for the valuable feedback. Below are our responses to the questions and concerns. **Motivation for Separation**: Separation is a desired property for machine-learning algorithms on point clouds, which bears both practical and theoretical importance. For example, it can be shown that for any neural network that is not separating, there exists a continuous invariant function that it cannot approximate. Thus, non-separating neural networks are not universal approximators of invariant functions. This theoretical weakness often manifests as an inability to reach a low training loss in real-world tasks. For instance, [Pozdnyakov et al.] have shown that a model that does not separate two non-isomorphic point clouds has poorer performance in a chemical regression task; see [Pozdnyakov et al., Section V, Figure 8]. Similarly, GNNs that cannot separate non-isomorphic graphs show inferior empirical performance in practical learning tasks compared to GNNs that do separate [Chen et al.]. **Motivation for WL:** The Weisfeiler-Leman (WL) hierarchy is currently the predominant method of measuring the separation capability of GNNs. Notable examples include [Morris et al., Xu et al., Maron et al. 2019b]. Architectures such as these are widely used for real-world tasks on point clouds [Feng et al., Maron et al. 2019b]. New variants of high-order WL are frequently introduced to obtain more robust or fine-grained separation power. Here we introduced a novel variant of 2-WL to obtain provable separation of point clouds with a lower time complexity than existing methods. We will modify our manuscript to clarify this motivation. **Concern Regarding Experiments:** As mentioned in our general response above, this paper is a theoretical paper, whose aim is not to propose a specific algorithm to compete with state of the art, but rather to analyze the separation power of the k-WL test on point clouds. 
The main purpose of the included experiments was to validate our theoretical results. We believe that our synthetic experiments are sufficient for this purpose. Notwithstanding, our theoretical results have high relevance to architectures that are used in practice, as many models that are as powerful as 3-WL show strong performance on real-world tasks, e.g. the QM9 molecular property prediction benchmark [Maron et al, Feng et al.]. Furthermore, our proposed 2-EWL lays the foundation for the development of architectures with a strong separation capability and a lower running time than the aforementioned 3-WL-based methods. **Regarding the novelty of EWL tests:** While 1-EWL has been previously proposed [Pozdnyakov et al.], $k$-EWL tests with k>1, as well as $k$-SEWL tests, are indeed a novelty in our work. Moreover, 1-EWL based on MPNN, introduced in [Pozdnyakov et al.], is a template algorithm, which relies on black-box embeddings of multisets in $\mathbb{R}$ that are required to be separating. Another novelty of our work is that we introduce a concrete $\textit{instantiation}$ of this test, with a continuous separating embedding. This endows our instantiation with the power to separate almost any 3D point cloud, while maintaining efficient running time. The original motivation for developing the SEWL tests was to be invariant to rotations but not to arbitrary orthogonal transformations, e.g. reflections. We then used this test to derive the 2-EWL test, which is separating on 3D point clouds while having a lower computational and memory complexity than the vanilla 3-EWL. We then propose a continuous implementation of this test in $O(n^4 \cdot \log(n))$ time — the lowest complexity of a continuous separating algorithm that we are aware of. **Non-Neural Network Methods:** We thank the reviewer for this suggestion. We intend to add a discussion of non-neural methods such as [Bigi et al.], [Drautz], [Dusson et al.] 
to our manuscript and consider evaluating some of them in our synthetic experiments. **References:** [Bigi et al.] Bigi, Filippo, et al. "Wigner kernels: body-ordered equivariant machine learning without a basis." arXiv preprint arXiv:2303.04124 (2023). [Drautz] Ralf Drautz, "Atomic cluster expansion for accurate and transferable interatomic potentials," Phys. Rev. B 99, 014104 (2019). [Dusson et al.] Genevieve Dusson, Markus Bachmayr, Gábor Csányi, Ralf Drautz, Simon Etter, Cas van der Oord, and Christoph Ortner, "Atomic cluster expansion: Completeness, efficiency and stability," Journal of Computational Physics 454, 110946 (2022). [Gasteiger et al.] Gasteiger, Johannes, Florian Becker, and Stephan Günnemann. "GemNet: Universal directional graph neural networks for molecules." Advances in Neural Information Processing Systems 34 (2021): 6790-6802. [Dym et al. 2023] Dym, Nadav, and Steven J. Gortler. "Low dimensional invariant embeddings for universal geometric learning." arXiv preprint arXiv:2205.02956 (2022). [Zhao et al. NeurIPS 2022] Zhao, Lingxiao, Neil Shah, and Leman Akoglu. "A practical, progressively-expressive GNN." Advances in Neural Information Processing Systems 35 (2022): 34106-34120. (Please note that due to insufficient space, further references are in the Author Rebuttal.) --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you for clarifying several points and taking the time to answer my questions. Also, I'm happy to see that you will add a discussion on non-neural-network-based methods to the write-up. I'm raising my score accordingly. --- Rebuttal 2: Title: Response by another reviewer Comment: Dear fellow reviewer, > I think there is some motivation lacking for why one wishes to separate point clouds via GNNs, or why it is desirable to construct variants of k-WL. The paper hints at the importance of this for chemical applications, but there isn't much discussion about this beyond the intro.
I'd like to point you to an emerging line of work on separating point clouds via GNNs, both from a theoretical and experimental perspective. There is a growing interest in separation as a design principle for 3D geometric GNNs -- models' relative abilities to separate point clouds are one possible measure of **expressive power**. - PhysRev: https://arxiv.org/abs/2001.11696 - NeurIPS: https://arxiv.org/abs/2206.07697 - ICML: https://arxiv.org/abs/2301.09308 (These are just some prominent/recent ones. These models are being used for a myriad of AI for Science applications.) This paper and others on separation give us a useful mental framework to compare architectures in this emerging and important class of models in an abstract manner, while removing implementation details. > The formulation of EWL doesn't seem novel I believe the authors don't claim it is novel, either. They build upon the work of Pozdnyakov-Ceriotti-2022 (https://iopscience.iop.org/article/10.1088/2632-2153/aca1f8/meta). Best, Reviewer Z66V
Summary: This paper studies the theoretical completeness of neural networks for Euclidean/3D point clouds, from the perspective of whether they can distinguish all non-isomorphic point clouds. Key theoretical contributions include showing that variations of the k-WL graph isomorphism test are complete for 3D point clouds, and that distance-based 1-WL tests are complete for *almost all* point clouds (measure theoretic perspective). The work also demonstrates that a GNN can be designed with the proposed completeness guarantees, and sanity checks the theoretical results on synthetic counterexamples from previous studies. Strengths: - This work shows that adaptations of the k-WL hierarchy of graph isomorphism tests can be 'complete' on 3D point clouds. I believe this is a **novel** theoretical contribution for neural networks on point clouds in Euclidean space. - I believe the findings are **significant**, as neural networks on Euclidean graphs and point clouds are an emerging area of interest from both theoretical and applied perspectives. - The paper is **well written** and **clear** in terms of presentation: - The Introduction does a good job highlighting the research gap. - The coverage of related work in Section 1.1 is useful. - Section 2 makes a good bridge from WL to the Euclidean setting. - I went through the proofs, which are correct to the best of my understanding. Weaknesses: - It seems challenging to translate this paper's ideas into practice as the model's parameters depend on the number of points $n$ taken as input. This probably makes it very difficult to build a trainable model **while retaining** theoretical guarantees. - The authors are upfront about this when discussing limitations.
- Beyond sanity-checking the theoretical ideas on the counterexample from Pozdnyakov-Ceriotti, 2022, the synthetic experiment does not seem to provide any further insights into practical instantiations of the ideas in this paper, or about this class of models more broadly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions and clarifications: - Regarding E(n) Equivariant GNN (Satorras et al.) being less expressive than 1-EWL: won't more than one iteration of E(n) Equivariant GNN be able to distinguish between the counterexample of Pozdnyakov-Ceriotti, 2022? - The appendix states that the experiments use the QM9 variant of E(n) Equivariant GNN without the position updates. If you meant that this version is less expressive than 1-EWL: yes, in that case I agree, but that version of the model is not what I as a reader would usually consider E(n) Equivariant GNN. That model is invariant. - Regarding the relationship between Theorem 3.2, Figure 2, and 2-SEWLnet: technically, if a layer is injective or complete (as you prove for one iteration of 2-SEWL), **why do we even need to stack multiple of them?** - Regarding Theorem 2.1 and Theorem A.2: - The main takeaway here is that 1-EWL is sufficient to separate almost all point clouds. The reason is that, for counterexamples which cannot be separated by 1-EWL, the size of the manifold that those counterexamples belong to is very small w.r.t. that of all possible point clouds. Is my understanding correct? - On lines 533-535, I tried to follow the argumentation but: (1) Could you expand on why the dimensionality is <= 3n-1? (2) Basu et al. is a textbook; is there a better reference? Is this a very simple result? - Regarding experimental setup: Why were different # of layers used for different models? Suggestions: - In Figure 1, it may be useful to draw the actual graphs and also state the exact figures from the Pozdnyakov papers that each graph is taken from, to minimize ambiguity.
- Fix the citation for the E(n)-GNN paper (it should appear as Satorras et al., 2021 and include the rest of the authors). - Consider adding the equation for 2-EWL after eq.4. - Consider discussing the ACE framework and MACE in Related Work, as this is a framework for building a complete basis for equivariant functions on a set of points up to some interaction body order. - Typo in line 235. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations but not discussed any potential negative social impact. Beyond what the authors mention regarding practical instantiation of their models, one major theoretical limitation is that the framework is restricted to complete geometric graphs, and the construction of complete/universal models for the general sparse graph setting remains an open question. This may be worth reiterating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback. Below are our responses to your questions/concerns. **Concerns regarding translating ideas into practice:** Indeed, the separation guarantee incurs a high computational cost. However, once one is willing to forego guaranteed separation, there exist many heuristics to implement WL-based architectures with a significantly reduced running time, while having minimal impact on performance in real-life tasks; see, for example, [Feng et al.], [Morris et al.]. **Concern regarding experiments:** This manuscript aims to theoretically establish the expressiveness of WL-based architectures on point clouds, addressing a fundamental theoretical gap in the literature. Developing an efficient implementation of the WL hierarchy for practical tasks is indeed an interesting further research direction. However, it is outside the scope of this paper. **Regarding EGNN:** Indeed, our choice of naming was inaccurate and misattributed the lack of separation to the original EGNN, while the non-separating algorithm used in our experiments was the variant with no coordinate updates. We will change the model name from EGNN to 1-EWLsim, denoting a simulation of 1-EWL, to avoid this lack of clarity. **On Stacking Multiple Layers:** While one layer of our network is indeed injective, and thus can be used to approximate any continuous function (see Lemma A.1), it is often required to stack multiple layers to learn high-level representations. This has been observed in a variety of architectures in many domains; see [Bengio] for a detailed discussion of this phenomenon. We will add a clarification on this to the manuscript. **Regarding Theorem 2.1 and Theorem A.2:** (a) Yes, the set of counterexamples that cannot be distinguished by 1-EWL is indeed of measure zero. (b) To prove this, we show that this set is contained in the zero-set of a nontrivial polynomial.
We then combine this with the fact that the zero-set of a non-trivial polynomial is always dimension deficient, and thus has measure zero. This is a well-known result, which holds for the larger class of analytic functions as well. See, for example, Proposition 3 in [Mityagin 2020]. **Number of layers:** The different number of layers is based on the default choice of hyperparameters of the respective architectures, which we attempted to optimize. We presented the results with the best choice of hyperparameters for each architecture. **Suggestions:** We highly value the reviewer's suggestions and shall incorporate all of them into our manuscript. **References** [Pozdnyakov et al.] Pozdnyakov, S. N., & Ceriotti, M. (2022). _Incompleteness of graph convolutional neural networks for points clouds in three dimensions. arXiv_ e-prints, arXiv-2201. [Bartók et al.] Bartók, Albert P., Risi Kondor, and Gábor Csányi. "On representing chemical environments." Physical Review B 87.18 (2013): 184115. [Caron, Richard] Caron, Richard. (2005). _The Zero Set of a Polynomial._ 10.13140/RG.2.1.4432.8169. [Bengio] Bengio, Yoshua. "Learning deep architectures for AI." _Foundations and Trends® in Machine Learning 2.1_ (2009): 1-127. [Cybenko] Cybenko, George. "Approximation by superpositions of a sigmoidal function." _Mathematics of Control, Signals and Systems_ 2.4 (1989): 303-314. [Chen, Chi et al.] Chen, Chi, et al. "Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals." _Chemistry of Materials_ (2018). [Schutt et al.] Schütt, Kristof T., et al. "SchNet - a deep learning architecture for molecules and materials." _The Journal of Chemical Physics_ 148.24 (2018). [Xu et al.] Xu, Keyulu, et al. _"How powerful are graph neural networks?." arXiv_ preprint arXiv:1810.00826 (2018). --- Rebuttal Comment 1.1: Title: Concerns addressed; score increased Comment: Thank you for the clarifications.
My concerns have been addressed, and I have raised my score to reflect this. > Developing an efficient implementation of the WL hierarchy for practical tasks is indeed an interesting further research direction. However, it is outside the scope of this paper. I agree. I only meant that the model which realizes this paper's theory, **in its current form**, may not be practical. But I agree that this is not the paper's main contribution. This could even be a direction for follow-up work. > We will change the model name from EGNN to 1-EWLsim That makes sense. I think one would expect an equivariant model when reading 'EGNN'. > While one layer of our network is indeed injective, and thus can be used to approximate any continuous function (see Lemma A.1), it is often required to stack multiple layers to learn high-level representations. Understood. This aspect of the paper is very interesting! > Mityagin 2020 Perhaps you missed adding this reference? I cannot find a single-author paper by Mityagin in 2020 (https://scholar.google.com/citations?hl=en&user=Yyaun24AAAAJ&view_op=list_works&sortby=pubdate). --- Reply to Comment 1.1.1: Comment: We highly appreciate your interest in this proof and we apologize for accidentally omitting the reference for it. **Reference:** The dimension-reduction argument is stated in Proposition 2 in [Mityagin 2020]. [Mityagin 2020] Mityagin, B.S. The Zero Set of a Real Analytic Function. Math Notes 107, 529–530 (2020). https://doi.org/10.1134/S0001434620030189 **Context regarding Proposition 2:** Proposition 2 states that the Hausdorff dimension of the zero set of a non-zero analytic function is deficient. The Hausdorff dimension of a manifold is always greater than or equal to its topological dimension (the dimension of the manifold), see Theorem 6.3.10 in [Edgar 2007]; thus this reduction applies to the dimension of the manifold we defined in Equation 11, Theorem A.2 in our manuscript. **References:** [Edgar 2007] Edgar G.
Measure, Topology, and Fractal Geometry. Undergraduate Texts in Mathematics. New York: Springer; 2007. [Mityagin 2020] Mityagin, B.S. The Zero Set of a Real Analytic Function. Math Notes 107, 529–530 (2020). https://doi.org/10.1134/S0001434620030189
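As an illustrative complement to the measure-zero argument in the thread above, the following sketch (our own illustration, not the paper's actual 1-EWL implementation; all function names are ours) checks numerically that generic random 3D point clouds are already told apart by a crude 1-EWL-style invariant, namely the multiset of per-point sorted distance profiles, while that invariant is unchanged under rigid motions of the same cloud:

```python
import numpy as np

# A crude 1-EWL-style invariant: each point's sorted multiset of distances
# to all points, with the rows put in a canonical (permutation-invariant)
# order. Illustrative only -- not the paper's 1-EWL instantiation.
def distance_profile(X):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rows = np.sort(D, axis=1)                # each point's distance multiset
    return rows[np.lexsort(rows.T[::-1])]    # canonical ordering of the rows

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))                  # a generic random 3D point cloud
Y = rng.normal(size=(8, 3))                  # an independent random cloud
R, _ = np.linalg.qr(rng.normal(size=(3, 3))) # a random orthogonal transform

# Invariant under rigid motion of the same cloud...
assert np.allclose(distance_profile(X), distance_profile(X @ R.T))
# ...but (generically) different between independent random clouds,
# in line with the claim that non-separated pairs form a measure-zero set.
assert not np.allclose(distance_profile(X), distance_profile(Y))
```

The counterexamples of Pozdnyakov-Ceriotti are precisely the exceptional configurations where such profiles coincide; random sampling almost surely avoids them.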
Summary: This paper seeks to theoretically demonstrate the complete determination of point clouds, up to permutation and rigid motion. The authors formulate a Euclidean variant of the 2-WL test, effectively illustrating the separation capacity of the Euclidean Graph Neural Network on highly symmetrical point clouds. Strengths: 1. The paper delivers a theoretical exploration of point cloud completeness. 2. It discusses the separation capability of the Euclidean Graph Neural Network in high-dimensional representations. Weaknesses: 1. In Appendix line 564, what does $(\star)$ stand for? 2. Does the proposed method risk confounding reflection equivariance? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. Below are our responses to your questions/concerns. **Syntax:** $(\star)$ stands for "they both have rank $r$, and $x_i \neq x_j$". We then refer to this fact in the following sentence. To improve clarity, we will replace $(\star)$ with "Due to the fact that they both have rank $r$, and $x_i \neq x_j$,". We thank the reviewer for this point. **Regarding Reflection Equivariance:** Our proposed method can accommodate both rotation invariance and simultaneous rotation-and-reflection invariance. In particular, the SEWL test is invariant to rotations but not reflections, and the EWL test is invariant to both rotations and reflections. Please let us know if we adequately addressed your question. On another note, [Villar et al.] characterize rotation-equivariant functions via rotation-invariant functions. Thus, the results in this manuscript may be used to obtain rotation-equivariant universal models. This is an active direction for future work. [Villar et al.] Villar, Soledad, et al. "Scalars are universal: Equivariant machine learning, structured like classical physics." Advances in Neural Information Processing Systems 34 (2021): 28848-28863.
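The rotation-versus-reflection distinction behind the SEWL/EWL discussion above can be illustrated with a toy numerical check (ours, not the actual SEWL test): distance-based features are blind to reflections, while a signed volume flips sign under a reflection but not under a rotation:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 3))                  # three generic points in R^3
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])  # a rotation (det = +1)
Ref = np.diag([1.0, 1.0, -1.0])              # a reflection (det = -1)

# Signed volume of the three points (as rows): changes sign under reflection.
signed_vol = lambda P: np.linalg.det(P)
# Sorted pairwise distances: invariant under any orthogonal transform.
dists = lambda P: np.sort(np.linalg.norm(P[:, None] - P[None, :], axis=-1), axis=None)

assert np.allclose(dists(X), dists(X @ Ref.T))            # distances: reflection-blind
assert np.isclose(signed_vol(X @ Rot.T), signed_vol(X))   # det invariant under rotation
assert np.isclose(signed_vol(X @ Ref.T), -signed_vol(X))  # det flips under reflection
```

This is why a test built only from distances is invariant to all orthogonal transformations, whereas adding sign-sensitive (determinant-like) features yields invariance to rotations only, as the SEWL test is designed to achieve.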
Rebuttal 1: Rebuttal: **Response to All Reviewers** We would like to thank the reviewers for their helpful remarks and detailed feedback, which we have read carefully. We were glad to see that the reviewers recognized our novel theoretical contribution. Yet, we feel that we did not convey the significance of our results in the context of separation and $k$-WL tests. Let us attempt to remedy this: Separation is a desired property for machine-learning algorithms, with both theoretical and practical importance. For example, neural networks that cannot separate point clouds are provably $\textit{not}$ universal approximators of continuous point-cloud functions. This theoretical weakness may hinder performance in real-world tasks such as molecular property prediction [Pozdnyakov et al. 2022] and regression on social-network graphs [Chen et al.]. Many recent popular architectures for point clouds are based on $k$-WL tests [Morris et al.], [Feng et al.], [Maron et al.]. Yet, prior to our work, no such architecture was proven to separate point clouds. In this work we show for the first time that any architecture that is as expressive as 3-WL is provably separating. In addition, we propose a novel variant of 2-WL that is also provably separating, while being more computationally efficient than 3-WL. Regarding our experiments, the main focus of our work is theoretical. As such, its aim is not to propose a practical algorithm, but rather to answer a long-standing theoretical question regarding the separation capability of WL-based architectures. For this purpose we used synthetic experiments, as we believe that they are the most suitable means to validate our theoretical results. We note that this is common practice, as many theoretical papers on this topic have been published at NeurIPS based on the strength of their theory, e.g. [Joshi et al., ICML 2023], [Villar et al., NeurIPS 2021], [Dym-Maron, ICLR 2021], [Wagstaff, ICML 2019], [Aamand, NeurIPS 2022].
Such theoretical results often proved valuable in the subsequent development of practical methods. For instance, [Dym-Maron, ICLR 2021] has inspired GemNet [Gasteiger et al.], a widely used GNN. To conclude, we believe that the novel theoretical results presented in this manuscript will be of interest to the research community, and may lay the foundations for further theoretical as well as practical research. Below we provide several quotes from recent papers to establish our claim: “Proposition 2 states that this architecture [Invariant Graph Network (IGN) applied to Gram matrices] universally approximates O(d) invariant and permutation equivariant functions. The full approximation power requires high order tensors to be used for the IGN; in practice, we restrict the tensor dimensions for efficiency …” [Lim, ICLR 2023] “... provably universal equivariant frameworks are such in the limit in which they generate high-order correlations… It is an interesting, and open, question whether a given order suffices to guarantee complete resolving power.” [Pozdnyakov, MLST 2022] “... an interesting open problem is understanding whether universality can be achieved using only low-dimensional representations.” [Dym, ICLR 2021] **References** [Aamand, NeurIPS 2022] Aamand, Anders, et al. "Exponentially improving the complexity of simulating the Weisfeiler-Lehman test with graph neural networks." Advances in Neural Information Processing Systems 35 (2022). [Villar, NeurIPS 2021] Villar, Soledad, et al. "Scalars are universal: Equivariant machine learning, structured like classical physics." Advances in Neural Information Processing Systems 34 (2021). [Wagstaff, ICML 2019] Wagstaff, Edward, et al. "On the limitations of representing functions on sets." International Conference on Machine Learning. PMLR, 2019. [Dym, ICLR 2021] Nadav Dym and Haggai Maron. 
“On the Universality of Rotation Equivariant Point Cloud Networks” International Conference on Learning Representations (ICLR), 2021 [Lim, ICLR 2023] Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, and Stefanie Jegelka. “Sign and Basis Invariant Networks for Spectral Graph Representation Learning.” International Conference on Learning Representations (ICLR 2023) [Pozdnyakov, MLST 2022] Pozdnyakov, Sergey N., and Michele Ceriotti. "Incompleteness of graph neural networks for points clouds in three dimensions." Machine Learning: Science and Technology 3.4 (2022). [Geerts et al.] Geerts, F., & Reutter, J. L. (2022). _Expressiveness and approximation properties of graph neural networks. arXiv_ preprint arXiv:2204.04661. [Chen et al.] Chen, Z., Villar, S., Chen, L., & Bruna, J. (2019). On the equivalence between graph isomorphism testing and function approximation with gnns. _Advances in Neural Information Processing Systems_, 32. [Pozdnyakov et al.] Pozdnyakov, S. N., & Ceriotti, M. (2022). _Incompleteness of graph convolutional neural networks for points clouds in three dimensions. arXiv_ e-prints, arXiv-2201. [Morris et al.] Morris, C., Lipman, Y., Maron, H., Rieck, B., Kriege, N. M., Grohe, M., Fey, M., and Borgwardt, K. _Weisfeiler and Leman go machine learning: The story so far. arXiv_ preprint arXiv:2112.09992, 2021. [Feng et al.] Feng, Jiarui, et al. _Towards Arbitrarily Expressive GNNs in $O(n^2)$ Space by Rethinking Folklore Weisfeiler-Lehman. arXiv_ preprint arXiv:2306.03266 (2023). [Gasteiger et al.] Gasteiger, Johannes, Florian Becker, and Stephan Günnemann. "GemNet: Universal directional graph neural networks for molecules." Advances in Neural Information Processing Systems 34 (2021). [Maron et al.] Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. Advances in Neural Information Processing Systems, 32, 2019.
NeurIPS_2023_submissions_huggingface
2023
Extremal Domain Translation with Neural Optimal Transport
Accept (poster)
Summary: This paper presents a novel OT problem, extremal transport (ET), in the context of domain translation. The authors propose an incomplete transport (IT) problem as a surrogate optimization problem to obtain an approximate solution to the ET problem. The theoretical convergence between IT and ET costs (plans) is proven conditionally. The effectiveness of ET and IT is showcased through experiments on toy examples and public datasets, while also demonstrating the relationship between ET and IT. Strengths: - The optimal transport formulations in this work, i.e., ET and IT, are something that I have not seen. These formulations, for me, are inspiring and interesting. - The paper provides a thorough discussion on the technical differences and relationship with other existing methods. - The overall presentation of the main paper is generally clear. Furthermore, the full paper, including the appendices, appears to be comprehensive and well-prepared. The theoretical results are presented in a comprehensive and self-contained manner. Weaknesses: - The scope of and motivation for the work are a little unclear from the abstract and introduction. In particular, the motivation to incorporate ET in the context of domain translation is not clear, which may confuse readers. It should be highlighted at least in the abstract and introduction sections. - Certain statements lack sufficient explanations or supporting intuitions, which should be explained and provided in the main text. To list a few: - In lines 85-91, a variant of the OT problem is established. The authors mention "we say that the target domain is the part of Y where the probability mass of Q lives". However, the intuition behind this statement is not well presented and should be elaborated upon. - In lines 76-78, the intuition for establishing the ET framework in the context of domain translation should be demonstrated. It should also be highlighted in the abstract and introduction sections.
Specifically, readers may hope to see why it is essential to construct ET for the domain translation task, given thorough related works on weak OT, partial OT, etc. Although you have formulated the technical difference with them, I hope to see an analysis of "why ET outperforms them in your application". - In lines 91-92, the authors relax the mass preservation condition for the target domain. Why not relax the condition for both the target and source domains, as in UOT? A concise and simple comparison with current weak / partial OT should be clarified here, to **explain the advantages of ET in domain translation compared with peer methods (rather than purely formulating the technical differences)**, which is critical for identifying and evaluating ET's role in domain translation. ### Minors - Figures 3-4 are somewhat redundant and largely inherited from the current literature. As such, they should not take up so much space. A better option would be to condense them into one line or move them to the appendix. - In line 104, the abbreviation NN could lead to unexpected confusion, since it usually represents neural networks. - In lines 80-109, the authors establish an optimization problem to seek the nearest neighbors of $x\in\mathcal{X}$ in Supp(Q). The narrative in this section is somewhat redundant. - It is essential for readers to acknowledge the technical differences with other OT methods. As such, I suggest the authors move the "Related works" section after the "Background on optimal transport" section, and highlight the technical differences with each genre of OT methods concisely. My current score is a reflection of the weaknesses above as they stand. Since these seem like rather correctable issues, I expect to increase my score pending a positive response from the authors.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The authors noted "the task is ill-posed as there might exist multiple suitable T." This is confusing to me for two reasons: (1) you just mention that there could be multiple maps between source and target; however, the optimization problem has not been formulated here, and it is not proper to call it an ill-posed problem, since the word "ill-posed" seems suitable only for optimization problems; (2) an optimization problem such as the Kantorovich problem may not be ill-posed in some cases given certain metrics, is that right? Please consider improving the narrative to make it more rigorous and self-contained. - The authors noted "the challenge here is that the correspondence between available data samples $x$ from the source and $y$ from the target domains is not given." What if the correspondence is given? Can we get a new optimization problem with intriguing properties? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your comments. Here are the answers to your questions. **(1) The motivation to incorporate ET in the context of domain translation is not clear <...> in the abstract and introduction.** The motivation to incorporate ET in the context of domain translation is implicitly stated in lines 25-31 of the introduction. Indeed, ET allows one to perform domain translation while achieving the best possible similarity between source and generated images. Such a property is needed in various particular downstream image-to-image problems including, e.g., image inpainting or super-resolution (lines 222-225). *We will state the motivation more clearly in the abstract and introduction in the final version of our paper.* **(2) The intuition behind "we say that the target domain is the part of $\mathcal{Y}$ where the probability mass of $\mathbb{Q}$ lives" (lines 85-91) is not clear.** Consider a typical example $\mathcal{Y}=[-1,1]^{D}$. Following the standard manifold hypothesis [1], a real data distribution $\mathbb{Q}$ (e.g., a distribution of images of faces) is usually supported on a low-dimensional manifold $M=$ Supp$(\mathbb{Q})\subset [-1,1]^{D}$ occupying a tiny part of the ambient space $[-1,1]^{D}$. Thus, the intuition behind the mentioned statement is that the probability distribution $\mathbb{Q}$ lives on this manifold $M$, which represents the target domain. *We will add clarifications to the final version.* **(3) Demonstrate the intuition to establish the ET framework in the context of domain translation. Why it is essential to construct ET for the domain translation task, given thorough related works on weak OT, partial OT, etc. <...> "why ET outperforms them".** Our experiments with domain translation do not aim to show that the ET construction is essential for solving the domain translation task, but only demonstrate that the proposed numerical algorithm is applicable to the problem with real data.
In particular, it can recover very good similarity of translated samples to input samples, which may be of high importance in certain image-to-image translation tasks (lines 222-233), while existing (GAN-based) methods encounter their limitations (Appendix). To the best of our knowledge, there are also no works that apply weak OT or partial OT to solve the same problem as ET (nearest-neighbor computation with out-of-sample generalization). **(4) In lines 91-92, the authors relax the mass preservation condition for the target domain. Why not relax the condition for both the target and source domains, as in UOT?** In a domain translation task, we need to translate a sample from the source domain to the target one. Relaxing the condition for the target domain helps us achieve the 'best possible similarity' between source and generated images, which is the primary goal of our IT approach. At the same time, relaxing the source distribution is counter-intuitive for the domain translation task, where all the **test** samples will come from the source distribution $\mathbb{P}$. **(5a) The NN abbreviation is confusing.** We agree with the reviewer that the NN abbreviation is commonly employed to denote neural networks. At the same time, it is also a common abbreviation for nearest neighbors, e.g., as in $k$-NN. To avoid misunderstanding, we explain the abbreviation several times in the paper, see lines 32 and 104-106. **(5b) In lines 80-109, the authors establish an optimization problem to seek the nearest neighbors of x$\in$X in Supp(Q). The narrative in this section is kind of redundant.** We kindly ask the reviewer to specify which parts of the section seem to be redundant. We will be happy to improve the section's narrative. **(5c) Acknowledge the technical differences with each genre of OT methods.
Move the "Related works" section after "Background on OT".** We highlight technical differences with conventional neural OT methods in Appendix F (lines 906-910), discrete partial OT methods in Appendix D (lines 650-657), and discrete unbalanced OT in Appendix B (lines 554-560). Unfortunately, we could not move the related work section to the beginning of the paper, since it uses some ET/IT notions and properties which are introduced in prior sections. *We will move the technical differences with other OT methods from the Appendix to the related work section.* **(6) The statement "the task is ill-posed as there might exist multiple suitable T" is confusing. <...> the Kantorovich problem could be not ill-posed in some cases given certain metrics, is it right?** (a) We agree with the reviewer's comment that maybe 'ill-posed' is not an appropriate designation for the problem before it is strictly mathematically formulated. *We will change the word 'ill-posed' to 'ambiguous' in line 15 in the final version of our paper.* (b) You are correct in your understanding that the Kantorovich problem is not an ill-posed problem. **(7) What if the correspondence between available data samples from the source and y from target domains is given?** If the correspondence is given, i.e., **paired** samples $(x_n,y_n)$ are available for training, the problem can be straightforwardly solved by common supervised learning techniques such as Pix2Pix [2]. This setup is out of the scope of our paper. We consider the **unpaired** setup, where there is no supervision. **Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score. **References.** [1] Fefferman, C. et al. Testing the manifold hypothesis. [2] Isola, P. et al.
Image-to-image translation with conditional adversarial networks. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you very much for your detailed response. The author response has mitigated my concerns, especially (1-4). I believe the authors could consider my suggestions in preparing their final version. After reading the comments from other reviewers and author responses, I am pleased to adjust my initial rating to support acceptance of this work more definitely.
Summary: The paper proposes extremal transport (ET), a mathematical framework for achieving the best possible unpaired translation between two domains based on a given similarity function, and proves that ET maps can be learned as a limit of specific partial optimal transport (OT) problems. The contributions of the paper include a formalization of ET as a rigorous mathematical task, the characterization of ET maps and plans through a connection to nearest neighbors, and the derivation of an efficient computational algorithm based on partial optimal transport. The proposed algorithm is evaluated using 2D examples and the unpaired image-to-image translation task. Strengths: 1. The logic of this paper is well organized, and the mathematical formulas are well defined. 2. The work demonstrates that it is possible to relax equality constraints to inequality constraints, especially in situations where enforcing rigorous equality constraints is difficult, and proposes incomplete transport (IT) to approximate ET maps using partial optimal transport (OT) problems, which is very novel. 3. The experimental part is quite complete and proves the effectiveness of the proposed method. Weaknesses: The motivations for defining the ET problem, and the relationship between a specific task such as unpaired image translation and the mathematical ET problem, are not clarified. The process from the abstraction of practical problems to the theoretical derivation is too steep. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is the ET problem defined on compact Polish spaces? Could it transfer to other spaces? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The ℓ2 and FID metrics have some limitations in characterizing the effectiveness of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your comments. Here are the answers to your questions. **(1) The motivations for defining the ET problem and the relationship between a specific task such as unpaired image translation and the mathematical ET problem are not clarified. The process from abstraction of practical problems to theoretical derivation is too steep.** We thank the reviewer for pointing this out. Our choice to consider the unpaired image translation problem in the context of ET was motivated by the fact that recovering good similarity of translated samples to input samples may be of high importance in certain image-to-image translation tasks (lines 222-233). **(2) Why is the ET problem defined on compact Polish spaces? Could it transfer to other spaces?** **Compactness.** We set the spaces $\mathcal{X}$ and $\mathcal{Y}$ to be compact as it is a natural property which holds for various real-world distributions, e.g., images. At the same time, the compactness assumption notably shortens the derivation of the theoretical results, e.g., $\inf$/$\sup$ can be automatically replaced by $\min$/$\max$ everywhere, simplifying derivations, etc. We think that most of our results can be generalized to non-compact spaces but we leave this for future theoretical studies. **Polish spaces.** Recall that a Polish space is a *separable completely metrizable topological space* (and $\mathbb{R}^{D}$ is an example). Its metrizability yields the equivalence between compactness and sequential compactness, which is simpler to work with (we use it, e.g., in our Theorem 3). Its separability is required in the Banach-Alaoglu theorem, which we use in our Proposition 2. Completeness holds automatically due to the compactness assumption. **(3) The $\ell^2$ and FID metrics have some limitations in characterizing the effectiveness of the method.** We use the FID metric simply because there are no principally different alternatives. 
In fact, all the metrics for generative models which we have seen in the related papers evaluate set-to-set similarity rather than the quality of individual samples. If you can suggest popular metrics based on individual images, we are happy to include them in our evaluation. Regarding the $\ell^2$ metric, we used it as a cost function during training of our models and, as a consequence, employ it to evaluate the similarity between input and generated images. **Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.
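As context for why FID is inherently a set-to-set metric: it is the Fréchet distance between two Gaussians fitted to feature statistics of the compared image sets, so it never scores an individual sample. Below is a minimal numpy/scipy sketch of that quantity (our own illustration, not the paper's evaluation code; in practice the statistics come from Inception features):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians N(mu1, sigma1), N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    covmean = sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts from sqrtm
    return float(((mu1 - mu2) ** 2).sum()
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu, sigma = np.zeros(3), np.eye(3)
# identical statistics give distance 0
assert abs(fid(mu, sigma, mu, sigma)) < 1e-6
# shifting the mean by the all-ones vector adds ||v||^2 = 3
assert abs(fid(mu, sigma, mu + 1.0, sigma) - 3.0) < 1e-6
```

The whole comparison runs through set-level moments (mu, sigma), which is the point made in the rebuttal.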
Summary: This paper introduces a mathematical formalization called "extremal transport" that aims to achieve optimal translation between unpaired domains based on a given similarity function. Additionally, the paper proposes a scalable algorithm that utilizes neural optimal transport to approximate extremal transport mappings. The algorithm is tested on toy examples and the image-to-image translation task, yielding promising results. Strengths: The paper introduces a novel mathematical formalization called extremal transport for achieving optimal translation between unpaired domains. Additionally, the concept of neural optimal transport is introduced and applied in the algorithm. These theoretical foundations provide a solid basis for the algorithm design. The algorithm is tested on toy examples and the image-to-image translation task, demonstrating good results. These experiments validate the effectiveness and scalability of the algorithm. Image-to-image translation is an important problem in computer vision, and the proposed algorithm can be applied to address this problem. Furthermore, the paper mentions other potential application areas, such as single-cell data analysis in biology. The algorithm proposed in this paper can be widely applied to solve translation problems between unpaired domains, making it highly generalizable. Additionally, a new method called partial optimal transport is introduced, which can be used to address alignment problems between imbalanced measures, further expanding its potential for generalization. Weaknesses: Based on the information provided, the paper primarily focuses on the common main features of existing approaches rather than encompassing all of them. It's possible that due to space constraints, some concepts or details may not be extensively elaborated in the main paper. However, substantial explanations are provided in the Appendix. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In lines 121-124, it is not mentioned how the lower bound is defined during the process of analyzing and constructing upper and lower bounds to simplify the objective. The reason why value (8) admits more minimizers is also not explained. In lines 133-136, an explanation is needed for the replacement mentioned here and for why using a finite parameter w can achieve the desired constraint of Supp(T#P) belonging to Supp(Q). In lines 218-220, it is asked whether the method presented in the paper scales to high dimensions. If images are defined as high-dimensional, this should be clarified from the beginning of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper has conducted extensive discussions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your comments. Here are the answers to your questions. **(1a) In lines 121-124, how the lower bound is defined during the process of analyzing and constructing upper and lower bounds to simplify the objective is not mentioned.** Equation (10) states that for any $\pi\in\Pi^{\infty}(\mathbb{P},\mathbb{Q})$ it holds (for $\mathbb{P}$-almost all $x\in\mathcal{X}$) that: $c^{\*}(x)=\min_{y\in Supp(\mathbb{Q})}c(x,y)\leq \int_{\mathcal{Y}}c(x,y)d\pi(y|x)$. If we now integrate equation (10) with respect to $x\sim\mathbb{P}=\pi_{x}$ and take the $\inf$ over all admissible plans, we get the following inequality: $\int_{\mathcal{X}}c^{\*}(x)d\mathbb{P}(x)\leq \inf_{\pi\in\Pi^{\infty}(\mathbb{P},\mathbb{Q})}\int_{\mathcal{X}\times\mathcal{Y}}c(x,y)d\pi(x,y)$. This shows that $\int_{\mathcal{X}}c^{\*}(x)d\mathbb{P}(x)$ is indeed a lower bound for equation (8). At the same time, as we write in lines 102-103, $\int_{\mathcal{X}}c^{\*}(x)d\mathbb{P}(x)$ is also a lower bound for equation (5). From our Theorem 1 we see that there exists a measurable map $T^\*$ minimizing (5) such that the bound is tight. This automatically yields the existence of a deterministic plan $\pi^\*(y|x)=\delta_{T^\*(x)}$ such that (8)=$\int_{\mathcal{X}}c^{\*}(x)d\mathbb{P}(x)$=(5). *We will add these details to the final version of our paper.* **(1b) The reason why value (8) admits more minimizers is also not explained.** This fact follows from the definitions of Monge's and Kantorovich's Extremal Transport (ET) formulations. Indeed, as in the general OT theory, Kantorovich's ET formulation (8) is an extension of Monge's one (5). If there exists a minimizer $T^\*$ of equation (5), then it yields the corresponding deterministic minimizer $\pi^\*(y|x)=\delta_{T^\*(x)}$ of (8). However, Kantorovich's formulation allows splitting the mass between nearest neighbors and, therefore, may admit more minimizers. 
*Example.* Consider $\mathcal{X}=\mathcal{Y}=[-1,1]$. Let $\mathbb{P}=\delta_0$ and $\mathbb{Q}=\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_1$ be distributions concentrated at $\{0\}$ and $\{-1, 1\}$ respectively. Let $c(x, y)=\frac{1}{2}\|x-y\|^{2}$ be the quadratic cost. Then there are obviously two extremal OT maps $T^\*$ delivering the minimum to Monge's ET problem: $T^\*(0)=-1$ and $T^\*(0)=1$. At the same time, there are infinitely many minimizers $\pi^{\*}$ of Kantorovich's problem (8). Indeed, all the plans distributing the mass of $\mathbb{P}$ between the points $\{-1\}$, $\{1\}$, i.e., satisfying $\pi(0, -1)=m$, $\pi(0, 1)=1-m$, are minimizers for $m\in [0,1]$. **(2) In lines 133-136, an explanation is needed for the replacement mentioned here and why using a finite parameter $w$ can achieve the desired replacement of $Supp(T\sharp\mathbb{P})$ belonging to $Supp(\mathbb{Q})$.** For any finite parameter $w$, the condition $T_{\sharp}\mathbb{P}\leq w \mathbb{Q}$ implies $Supp(T_{\sharp}\mathbb{P}) \subset Supp(\mathbb{Q})$. As we show in our paper, in the limit $w\rightarrow\infty$, the solutions to the IT problem converge to the ET ones in a certain sense. At the same time, the replacement of ET with IT is desirable, as the latter admits a dual formulation (which we derive) that can be efficiently solved with neural networks. How to directly enforce the ET condition $Supp(T_{\sharp}\mathbb{P}) \subset Supp(\mathbb{Q})$ in practice is an open problem. **(3) In lines 218-220, it is asked whether the method presented in the paper scales to high dimensions. If images are defined as high-dimensional, this should be clarified from the beginning of the paper.** Thank you for noting this aspect. *We will add the clarifications to the final version of our paper.* **Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. 
We are happy to address any remaining points during the discussion phase. --- Rebuttal Comment 1.1: Comment: The authors answered questions and will also add details. Their work is greatly appreciated.
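As a side note, the two-point example from the rebuttal above ($\mathbb{P}=\delta_0$, $\mathbb{Q}=\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_1$, quadratic cost) can be checked numerically; this short script is our own illustration of why every mass split $m$ attains the same minimal value:

```python
import numpy as np

def cost(x, y):
    # quadratic cost c(x, y) = 0.5 * |x - y|^2 from the example
    return 0.5 * (x - y) ** 2

# P = delta_0; Q is supported on {-1, 1}.
# Candidate plans: pi(0, -1) = m, pi(0, 1) = 1 - m for m in [0, 1].
values = [m * cost(0.0, -1.0) + (1.0 - m) * cost(0.0, 1.0)
          for m in np.linspace(0.0, 1.0, 11)]

# every such plan achieves the same transport cost 0.5,
# so the Kantorovich ET problem has infinitely many minimizers
assert np.allclose(values, 0.5)
```

The two Monge maps correspond to the endpoints $m=1$ and $m=0$; all intermediate $m$ are extra Kantorovich minimizers.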
Summary: This paper introduces the concept of extremal Optimal Transport (OT) and proposes the use of incomplete OT as a solution to the extremal OT problem. The authors present a duality method to address the incomplete OT problem and validate this approach using a toy 2D dataset and image translation tasks. Strengths: The authors offer the analysis of the proposed problems and algorithms, including the existence of Extremal Transport (ET), the presence of Incomplete Transport (IT), and the convergence of IT plans to the ET plans. They test their algorithms on several image translation datasets, demonstrating the ability to manipulate the similarity between the source image and the generated image by adjusting the weight parameter $w$. Their algorithm exhibits the capability to map the data towards a portion of the target distribution's support. Weaknesses: ## Motivation Firstly, the paper's logic doesn't fully convince me. The authors initially propose to solve the Extremal Transport (ET) problem, but then shift to solving the Incomplete Transport (IT) problem, presumably because the ET problem is too challenging. However, based on their experimental results, it seems more logical to propose solving the IT problem directly, as most of their visualizations demonstrate that adjusting the weight parameter $w$ controls the similarity between the source image and the generated image. If the authors are indeed intent on solving the ET problem, could they provide experimental evidence to support this? For instance, they could design some distributions that have a closed-form solution for ET, and then compare the ground truth minimum value of problem (5) with their simulation results. Regarding the motivation for problem (11): the authors state that "In practice, solving the extremal problem (5) is challenging because it is hard to enforce Supp($T \sharp P$) $\subset$ Supp($Q$)", but isn't this relatively straightforward for image tasks? 
Since images are typically scaled to a certain range like [-1,1], we can simply scale the images to meet this support requirement. I believe that the application of incomplete OT or extremal OT could be limited in image tasks, especially extremal OT, as multiple data points could be mapped to the same data point, resulting in generated images with limited diversity. ## Method Regarding the relaxation from problem 5 to 11, it seems to me that the constraint in 11 is not a softened version, but rather a stronger one. To satisfy the constraint in (11), one must also satisfy the constraint in (5). This is because in the constraint of (11), if Q=0, then regardless of the value of $w$, $T\sharp P$ must also be zero, which aligns with the constraint in (5). Why is it not feasible to design a dual formula for ET directly? The authors mention the issue of fake solutions. Could they consider borrowing ideas from the Kernel Neural OT paper? It seems that using a similar cost from Kernel Neural OT could enhance this paper, and it seems not that difficult? ## Results Given that the motivation of this paper is to solve the extremal OT problem, could the authors clarify under what conditions their method can recover the solution to extremal OT? Specifically, how large does the value of $w$ need to be? It's possible that this information is already included, and I may have overlooked it. The authors acknowledge that the FID is not particularly meaningful when $w>1$, yet it appears to be the primary metric used in this paper. If the authors believe that FID is not representative, why do they present so many FID results? They also claim that the image quality of the translated image does not decrease with increasing $w$. Is there a way to verify this quantitatively, perhaps with a metric based on the image itself rather than the image distribution? Some of the results presented in this paper seem less than satisfactory. 
For instance, in Figure 24, even when $w$ is large, some images fail to preserve hair colors. In Figure 22, as $w$ increases, some images exhibit unrealistic artifacts (e.g., the first grey shoe and the black saddle have strange bands on them), likely because the generated image is forced to share more similarity with the source image. Could this be a limitation of the proposed method? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Figure 5b may not be as illustrative as intended. In this case, it seems that pi would still uniquely correspond to a certain deterministic transport map. Perhaps it would be more instructive to provide an example where (8) admits multiple minimizers, such as a scenario where the support is not convex? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your comments. Here are the answers to your questions. **(1) Design distributions that have a closed-form ET solution. Compare simulation results with the ground truth.** We conduct a *Swiss2Ball* experiment in 2D, see Fig. 1, Table 1 in **the attached PDF file**. Here the supports of the source and target measures are partially overlapping and the solution to the ET problem (5) has a closed form: $ T(x) = x \cdot 1_{x\in B((0,0), 0.5)} + x \cdot \frac{R}{\|x\|\_2} \cdot 1_{x\notin B((0,0), 0.5)} $, see Fig. (1a). We provide the learned IT maps for $w\in \\{1, \frac{3}{2}, 2, 32\\}$, see Fig. (1b-1e). The quantitative results show that with the increase of $w$ our IT maps become closer and closer to the ground-truth ET one (Table 1). **(2) Isn't "Supp$(T\sharp\mathbb{P}) \subset$ Supp$(\mathbb{Q})$" relatively straightforward for image tasks?** We use the standard mathematical definition of the support (lines 39-40), i.e., the support of a non-negative measure $\mu$ is the closed set consisting of all points $x \in X$ for which every open neighbourhood $A\ni x$ satisfies $\mu(A)>0$. Following the standard manifold hypothesis [1], for a real data distribution $\mathbb{Q}$ (e.g., the distribution of images of faces) and $\mathcal{Y}=[-1,1]^{D}$, one may think of Supp$(\mathbb{Q})\subset [-1,1]^{D}$ as a low-dimensional manifold occupying a tiny part of the ambient space $[-1,1]^{D}$. In general, this manifold is unknown and we observe only some random data samples lying on it. This is the reason why enforcing the constraint Supp($T\sharp \mathbb{P}$) $\subset$ Supp($\mathbb{Q}$) is tricky in practice. **(3) The application of IT or ET could be limited in image tasks due to the limited diversity issue.** We discussed this issue as a limitation of our method in Appendix A (lines 503-509). 
However, we emphasize that it is a dataset-dependent problem which may appear only in specific dataset cases, like in our *texture* to *chair* translation example, see Fig. 9A. For clarity, we additionally demonstrated it on a specially designed toy example, see Fig. 10A. **(4.1) The constraint in (11) is not a softened version of (5).** Indeed, 'softened version' may not be the best formulation in this case, since the constraint $T_{\sharp}\mathbb{P}\leq w\mathbb{Q}$ is indeed stronger. However, this constraint is more computationally feasible in practice, and our answer to your question below explains why. *We will change the word 'soften' in section 3.2.* **(4.2) Why is it not feasible to design a dual formula for ET directly?** Most of the duality formulas in the OT field are derived using the ideas of the Lagrange multiplier method. It is applicable to equality constraints such as $T_{\sharp}\mathbb{P}=\mathbb{Q}$ or inequality constraints like $T_{\sharp}\mathbb{P}\leq w \mathbb{Q}$. It is not clear how to incorporate set-inclusion constraints such as $Supp(T\sharp\mathbb{P})\subset Supp(\mathbb{Q})$. This is why we need the transition from ET to IT. **(5) You mention the fake solutions issue. Consider using ideas from the Kernel NOT paper.** We perform an additional *Ball2Circle* experiment, see **the attached PDF file**, demonstrating the effect of using a kernel cost function for alleviating the fake solutions issue. We learn IT maps for $c(x, y)=\|x-y\|_2$ with and without regularization for weights $w\in\\{1,2,32\\}$. Without regularization, we observe that the method is unstable (for $w\in\\{1,2\\}$), see Fig. 2b-e. In contrast, with kernel regularization (+ stochastic map $T(x,z)$) [2] the method always converges, see Fig. 3. Our example shows that (a) fake solutions may be a problem and (b) regularization may help to deal with them. Further studying this aspect is out of the scope of the paper. 
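For intuition about answers (4.1)-(4.2): on finite samples, the inequality-constrained IT problem becomes a plain linear program in which the source marginal is an equality constraint and the target marginal only an upper bound $w\,q$. The sketch below is our own discrete illustration (via `scipy.optimize.linprog`), not the paper's neural dual solver:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, w = 4, 5, 2.0
X = rng.normal(size=(n, 2))          # source samples
Y = rng.normal(size=(m, 2))          # target samples
p = np.full(n, 1.0 / n)              # uniform source weights
q = np.full(m, 1.0 / m)              # uniform target weights
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared l2 cost matrix

# equality: row sums of pi equal p (all source mass must be transported)
A_eq = np.kron(np.eye(n), np.ones((1, m)))
# inequality: column sums of pi are at most w * q (relaxed target marginal)
A_ub = np.kron(np.ones((1, n)), np.eye(m))
res = linprog(C.ravel(), A_ub=A_ub, b_ub=w * q, A_eq=A_eq, b_eq=p,
              bounds=(0, None))
pi = res.x.reshape(n, m)

assert res.success
assert np.allclose(pi.sum(1), p, atol=1e-6)       # source marginal preserved
assert np.all(pi.sum(0) <= w * q + 1e-6)          # target marginal only bounded
```

With $w=1$ this reduces to the classical discrete OT problem; increasing $w$ lets mass concentrate on the target points closest to the source, mimicking the IT-to-ET limit.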
**(6) Under which conditions can the method recover the ET solution? How large does the value of w need to be?** If we understand correctly, you ask for a concrete convergence rate of IT plans $\pi^{w}$ to ET plans $\pi^{*}$ as a function of $w$. We do not specify the rate of convergence and leave this aspect for future studies (see lines 168-170). **(7) FID and metrics based on the images themselves.** We use the FID metric simply because there are no principally different alternatives. In fact, all the metrics for generative models which we have seen evaluate set-to-set similarity rather than the quality of individual samples. Moreover, in the *unpaired* translation task, evaluation based on individual images (e.g., paired metrics) seems not applicable. **(8) In Figure 24, some images fail to preserve hair colors. In Figure 22, some images exhibit unrealistic artifacts. Could this be a limitation?** We agree with the reviewer that in Figure 24 the hair colors are not preserved. However, we need to explain that this is the **expected behaviour**, since we aim to solve the problem of *'mapping to the nearest neighbor in a target dataset'*. Thus, if this target dataset does not contain samples with the same hair color as in the input image, our model is not intended to and should not keep the color. Regarding the artifacts in Figure 22, we note that they occur since the dataset is rather challenging. **(9) Figure 5b may not be as illustrative as intended. <...> Provide an example where (8) admits multiple minimizers, e.g. when the support is not convex?** *We will replace the picture in the final version of our paper with a new one where $\mathbb{Q}$'s support is non-convex.* **Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score. 
**References.** [1] Fefferman, C. et al. (2016). Testing the manifold hypothesis. [2] Korotin, A. et al. (2022). Kernel neural optimal transport. --- Rebuttal Comment 1.1: Comment: Dear reviewer Mz3K, does the authors' rebuttal address your concerns? In particular, can you comment on whether your concerns on formulation and method are addressed? Do you have further comments/questions for the authors? --- Rebuttal Comment 1.2: Comment: Thank you for your reply! I have some follow-up questions. (6) "We do not specify the rate of convergence and leave this aspect for future studies (see lines 168-170)." Can you discuss this even empirically? For example, for those face image style transfer and handbag <-> shoes datasets, what range is enough? (7) FID and metrics based on the images themselves. Could you try to add the following baseline? For each image in the source dataset, determine its closest match in the target dataset using a 1-nearest-neighbor approach. This method will allow you to create a new dataset. Then, calculate the FID with respect to this newly established dataset instead of the initial target dataset. I foresee a reduction in the FID value as $w$ increases. --- Reply to Comment 1.2.1: Title: Additional response Comment: Dear Reviewer Mz3K, please find the answers to your follow-up questions below. **(1) Convergence rate: empirically, which values of $w$ are enough to make IT maps close to ET?** We may use the $\ell^2$ transport cost of the learned IT maps to determine the $w$ for which IT maps become close enough to ET. Indeed, after a certain value of $w$, it is expected that $\mbox{Cost}_{w}$ will stop rapidly changing. Hence, intuitively, one may expect that the IT map is close enough to ET as well. For the *celeba*$\rightarrow$*anime* case, the $\ell^{2}$ cost decreases rapidly for weights $w\in\\{1,2,4,8\\}$, see Table 1(a) in our paper. 
However, as we discussed in Appendix G.2, a further increase of the weight (there we tested the additional weights $w\in\\{16,32\\}$) does not lead to any significant cost decrease. Therefore, for this pair of datasets, $w=8$ can be considered a sufficient value to get an IT map which is close enough to ET. In the *handbag*$\rightarrow$*shoes* experiment, the difference in $\ell^2$ cost between $w=4$ and $w=8$ in Table 1(a) seems insignificant. Thus, in this case, for $w=4$ the IT map may be treated as close enough to some ET map. **(2) FID for the 1-nearest-neighbors.** To address the reviewer's question, we calculate FID values between our IT maps and the 1-nearest-neighbors of input samples in the target dataset (test parts) for the *celeba*$\rightarrow$*anime* and *handbag*$\rightarrow$*shoes* experiments, see the Table below.

| | $w=1$ | $w=2$ | $w=4$ | $w=8$ |
|---------------------------------------------|-------|-------|-------|-------|
| *celeba*$\rightarrow$*anime* | 53.21 | 44.81 | 39.77 | 43.03 |
| *handbag*$\rightarrow$*shoes* | 73.35 | 68.31 | 73.44 | 80.61 |

We see that there is no obvious dependence between the weight $w$ and the calculated FID values. This is quite expected, since the comparison with discrete nearest neighbors is irrelevant in our case. Indeed, in our paper we show that the minimizer $T^*$ of the ET problem (which we seek as $w\rightarrow\infty$) maps each point $x\sim\mathbb{P}$ to its nearest neighbors *in the support* (Supp$(\mathbb{Q})$) of the target distribution, see lines 104-113. However, nearest neighbors in the **empirical** dataset are significantly **biased** relative to the desired nearest neighbors in the support. The fact that the **empirical nearest neighbors are a poor replacement of the true nearest neighbors** (in the support) was already indirectly illustrated in Appendix D of our paper. There we regressed (distilled) a neural network $T_{\theta}$ to predict the discrete (empirical) nearest neighbors in the train dataset. 
From Figure 18 we can make two valuable conclusions: - While the learned network generates high-quality images on the training dataset (Figure 18(b), $w=\infty$), it struggles to generalize well to the unseen test samples (Figure 18(a), $w=\infty$). - The empirical nearest neighbors may not have sufficient similarity with the input images. This can be seen from Figure 18(b) (the 1st and last lines). Despite the fact that the target empirical nearest neighbors are not shown there, our trained network ($w=\infty$) almost perfectly reproduces them (the train loss was almost zero in that experiment). Hence, the images in the last line can be viewed as the empirical nearest neighbors of the images in the first line. To conclude, FID values estimated using the empirical replacement of the nearest neighbors cannot be considered a representative metric.
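For completeness, the empirical 1-NN matching step used to build the baseline discussed above can be sketched in a few lines of numpy (our illustrative code; the FID would then be computed against the matched set with a standard FID implementation):

```python
import numpy as np

def empirical_nearest_neighbors(X, Y):
    """For each source sample x, return its l2-nearest sample in the
    target dataset Y -- the 'empirical 1-NN baseline' of the discussion."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    return Y[d2.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))    # stand-ins for (flattened) source images
Y = rng.normal(size=(10, 3))   # stand-ins for target images
nn = empirical_nearest_neighbors(X, Y)

assert nn.shape == X.shape
# each returned match is at least as close as any other target sample
for x, y in zip(X, nn):
    assert all(((x - y) ** 2).sum() <= ((x - z) ** 2).sum() + 1e-12 for z in Y)
```

The point of the rebuttal is that these matches live only on the finite sample, not on Supp(Q), which is why they are a biased proxy for the true nearest neighbors in the support.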
Rebuttal 1: Rebuttal: Dear reviewers, thank you for your thorough and detailed feedback! We are highly inspired by the fact that you agree on the novelty of the proposed Extremal and Incomplete Transport (ET/IT) formulations (Reviewers iXGC, URfJ, Cqh5, 3fqr), find our theoretical results to be valuable (Reviewers iXGC, 3fqr), acknowledge that our algorithm is widely applicable (Reviewer URfJ), and that its effectiveness is proved by complete and sufficient experimental evaluation (Reviewers URfJ, Cqh5). We are glad that you positively highlight the clear presentation and comprehensiveness (Reviewers iXGC, 3fqr, Cqh5) of our paper. We hope that our IT algorithm will be easy to use in practical applications. We will incorporate the changes suggested by the reviewers in the final version of our paper. We list the changes below: (a) Main text (**minor**) $-$ replacement of Figures (3-5) by ones where the support is non-convex (Reviewers Mz3K, 3fqr), plus minor requested clarifications here and there; (b) New **Appendix** section $-$ additional experiments (Reviewer Mz3K): a *Swiss2Ball* experiment where ET maps have a *closed form*, and a *Ball2Circle* experiment testing an advanced version of our method with a weak kernel cost; (c) **Addition** to Appendix F $-$ a small corollary showing the closeness of the IT problem solution to the set of ET plans (Reviewer iXGC). Please find the Figures for the experiments requested by reviewer Mz3K in the **attached PDF file**. Please find the answers to your questions below. Pdf: /pdf/b68ee70760a06ac073209994c04163749aa88cc8.pdf
Dataset source: NeurIPS_2023_submissions_huggingface. Conference year: 2023.
Summary: This paper proposes a novel notion of extremal transport, which relaxes the optimal transport problem by only requiring the support of the pushed-forward distribution to be a subset of the support of the target distribution. To solve this problem, the authors propose a novel approximation approach to find the solution from a subsequence of a series of incomplete OT problems (where T#P = Q is relaxed to T#P ≤ wQ, w ≥ 1; when w = 1 this is the traditional OT problem). Then the authors derive novel dual formulations of the incomplete OT problems, which they propose to solve by using neural networks to approximate the potential and c-transform solutions. The authors apply the method to toy datasets and image translation examples to demonstrate the scalability of the method. FID results on the image translation example prove the concept of the method. The authors carefully compare their results with related works and establish their theoretical novelty. The presentation of the results is clear and potential impacts are well discussed. Strengths: The paper proves several strong theoretical results on the relaxed versions of OT problems (i.e. ET and IT). The theoretical study of the solutions to these problems is fruitful. This opens up a new area of study in optimal transport. Weaknesses: Overall the theoretical results are rigorous and novel. To find the solution of the dual formulation of IT, the authors propose to use neural networks to approximate the transport map (necessary for solving large-scale problems). One concern I have is that in the traditional OT problem, the OT map is in general discontinuous. This poses difficulties in using DNNs to approximate the OT map. It would be very welcome if results on the regularity of the IT maps were presented, and/or on how well a neural network can approximate such maps. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: One question: Theorem 3 only guarantees the existence of a subsequence. In practice, if we found a sequence of solutions by Alg. 1, how can we make sure whether it is the desired ET solution? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the paper are adequately addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
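The incomplete-transport relaxation summarized in this review (T#P = Q relaxed to T#P ≤ wQ, w ≥ 1) can be illustrated on a tiny discrete problem. The sketch below is my own illustration with a generic LP solver, not the authors' neural method; the cost matrix and function names are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def incomplete_ot(p, q, C, w):
    """Discrete incomplete OT (sketch): minimize <C, pi> over couplings pi >= 0
    whose rows marginalize to p and whose column sums are bounded by w * q."""
    n, m = C.shape
    A_eq = np.zeros((n, n * m))          # row sums == p
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    A_ub = np.zeros((m, n * m))          # column sums <= w * q
    for j in range(m):
        A_ub[j, j::m] = 1.0
    res = linprog(C.ravel(), A_ub=A_ub, b_ub=w * q, A_eq=A_eq, b_eq=p,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

p = np.array([0.5, 0.5])                 # source distribution P
q = np.array([0.5, 0.5])                 # target distribution Q
C = np.array([[0.0, 1.0],                # both source points prefer target point 0
              [0.0, 1.0]])
_, cost_w1 = incomplete_ot(p, q, C, w=1.0)   # w = 1: classical OT, cost 0.5
_, cost_w2 = incomplete_ot(p, q, C, w=2.0)   # w = 2: all mass may go to target 0
```

With w = 1 the column constraints bind and the classical OT cost is recovered; with w = 2 both source points may map to the cheap target point inside Q's support, so the cost drops to zero, which is the mechanism the relaxation exploits.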
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your comments. Here are the answers to your questions. **(1) In the traditional OT problem, the OT map is in general discontinuous. This poses difficulties in using a DNN to approximate the OT map. It would be very welcome if results on the regularity of the IT maps were presented, and/or on how well a neural network can approximate such maps.** **Regularity.** In the proof of Proposition 3 (Appendix F, line 772), we derive an auxiliary statement that if $\pi^* \in\Pi^w(\mathbb{P},\mathbb{Q})$ is an IT plan between $\mathbb{P}$ and $\mathbb{Q}$, then it is an OT plan between $\mathbb{P}$ and $\pi_{y}^{\*}$. This also leads to the fact that IT maps $T^{*}$ between $\mathbb{P}$ and $\mathbb{Q}$ are the OT maps between $\mathbb{P}$ and $T^{\*}_{\sharp}\mathbb{P}$. Hence, results on their regularity may potentially be derived from the general regularity properties of OT maps, see [1], but this requires further study. **DNN approximation.** Fortunately, from the practical point of view, we can show that IT maps **can** be approximated with neural networks. As shown in [2, Theorem 1], assuming that the target distribution has a finite second moment, neural networks can arbitrarily well approximate the OT map (and hence our IT map) w.r.t. the $\mathcal{L}^2(\mathbb{P})$ norm. The authors provided a concise proof for stochastic OT maps, and the argument carries over to the deterministic case without significant changes. To conclude, neural networks can arbitrarily well approximate IT maps. **(2) In Theorem 3, it only guarantees the existence of a subsequence. In practice, if we found a sequence of solutions by Alg. 1, how can we make sure whether it is the desired ET solution?** Indeed, different sub-sequences might converge to different ET plans, and the whole sequence itself might not converge.
To address your question, we provide a short corollary of our Theorem 3 showing that as the weight $w$ increases, elements of any such sequence become closer to the **set** of ET plans. *Corollary.* $\forall\varepsilon>0$ $ \exists w(\varepsilon)\in[1,\infty)$ such that $\forall w \geq w(\varepsilon)$ and $\forall$ IT plan $\pi^w\in\Pi_w(\mathbb{P}, \mathbb{Q})$ solving Kantorovich's IT problem (equation (12) in our paper), there exists an ET plan $\pi^\*$ which is $\varepsilon$-close to $\pi^w$ in $\mathbb{W}_1$, i.e., $\mathbb{W}_1(\pi^\*, \pi^w)\leq \varepsilon$. *Proof.* Assume the contrary. Then $\exists \varepsilon$ such that $\forall w(\varepsilon)$ $\exists w \geq w(\varepsilon)$ and $\exists$ IT plan $\pi^w\in \Pi_w(\mathbb{P}, \mathbb{Q})$ solving (12) such that $\forall$ ET plan $\pi^\*$, it holds that $\mathbb{W}\_1 (\pi^w, \pi^\*)\geq \varepsilon$. Pick a sequence $w_1, w_2, ..., w_n \rightarrow \infty$ and the corresponding sequence of IT plans $\pi^{w_1}, \pi^{w_2}, ..., \pi^{w_n}$. From Theorem 3, it has a sub-sequence (weakly-*) converging to some ET plan $\pi^\*$: $\pi^{w^{n_k}}\rightarrow \pi^\*$. However, $\forall n_k$ $\exists w \geq w_{n_k}$, such that $\mathbb{W}\_1(\pi^{w^{n_k}}, \pi^{\*}) \geq \mathbb{W}\_1(\pi^{w}, \pi^{\*}) \geq \varepsilon$. Hence, the sub-sequence does not converge to $\pi^{\*}$ in $\mathbb{W}\_{1}$. Recall that convergence in $\mathbb{W}\_{1}$ coincides with weak-$\*$ convergence (for compact $\mathcal{X},\mathcal{Y}$), see [3, Theorem 5.9]. Hence, the subsequence also does not weakly-* converge to $\pi^{\*}$, which is a contradiction. $\square$ **Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score. **References.** [1] De Philippis, G., & Figalli, A. (2015).
Partial regularity for optimal transport maps. Publications mathématiques de l'IHÉS, 121(1), 81-112. [2] Korotin, A., Selikhanovych, D., & Burnaev, E. (2022, September). Neural Optimal Transport. In The Eleventh International Conference on Learning Representations. [3] Santambrogio, F. (2015). Optimal transport for applied mathematicians. Birkhäuser, NY, 55(58-63), 94.
ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning
Accept (poster)
Summary: This paper considers how to best use auxiliary tasks to improve performance on target tasks. Specifically, a "ForkMerge" procedure is proposed which consists of two parallel optimization procedures, one on the target task, and one which includes auxiliary data, and the resulting weights are synchronized at regular intervals. Compared to grid searching for an appropriate interpolation factor, the proposed approach can dynamically alter the interpolation between the two sets of model parameters. Experiments on a diverse set of benchmarks and a wide array of baselines show that the proposed approach is effective at improving performance using auxiliary tasks, relative to the state of the art. Strengths: * This paper tackles a difficult problem of improving target task performance using auxiliary data. * The approach is well-motivated and the experiments are quite convincing. * The justification in terms of reducing distribution shift relative to test data is quite interesting. Weaknesses: * The task selection with multiple auxiliary tasks is computationally expensive, and the impact of the pruning procedure is not completely clear. * There is no adequate discussion of limitations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
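The fork/merge procedure summarized in this review (fork the weights, train one branch on the target task only and one jointly, then merge with the validation-optimal interpolation) can be sketched on a toy quadratic problem. This is my own illustrative reconstruction with hypothetical names, not the paper's implementation:

```python
import numpy as np

def fork_merge_round(theta, grad_target, grad_joint, val_score, lr=0.1,
                     inner_steps=10, lambdas=np.linspace(0.0, 1.0, 11)):
    """One fork/merge round (illustrative sketch): fork the weights, train one
    branch on the target task and one jointly, then merge with the
    interpolation weight that maximizes validation score."""
    th_tgt, th_joint = theta.copy(), theta.copy()
    for _ in range(inner_steps):
        th_tgt = th_tgt - lr * grad_target(th_tgt)        # target-only branch
        th_joint = th_joint - lr * grad_joint(th_joint)   # target + auxiliary branch
    # grid-search the interpolation weight on validation performance
    best = max(lambdas,
               key=lambda lam: val_score(lam * th_joint + (1.0 - lam) * th_tgt))
    return best * th_joint + (1.0 - best) * th_tgt

# Toy quadratic example: the target optimum is t; the joint gradient is
# pulled toward an auxiliary-contaminated point a (negative transfer).
t = np.array([1.0, 0.0])
a = np.array([0.5, 0.5])
merged = fork_merge_round(
    theta=np.zeros(2),
    grad_target=lambda th: 2.0 * (th - t),
    grad_joint=lambda th: 2.0 * (th - a),
    val_score=lambda th: -np.sum((th - t) ** 2),  # validation = target performance
)
```

Because the merge grid includes the endpoints λ = 0 and λ = 1, the merged weights are never worse on the validation criterion than either branch alone, which matches the intuition that the procedure can fall back to target-only training when an auxiliary task hurts.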
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer ECtn for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** Impact of the pruning strategy. $\text{Table 4}$ illustrates the impact of the pruning strategy of ForkMerge. As the number of branches increases, the cumulative benefits derived from auxiliary tasks become more pronounced. However, while the overall gain increases with the number of branches, the individual gain per branch tends to diminish. This phenomenon reveals the trade-off between computational efficiency and task performance. Our pruning strategy allows users to customize the number of branches based on their specific needs and available computational resources. **Q2:** Computational efficiency and limitations. We acknowledge the computational demands of task selection involving multiple auxiliary tasks. Indeed, handling such complexity is a challenge that warrants attention. We view the integration of previous task grouping methods [1-3] with our auxiliary task learning approach as a promising avenue for further research. By combining task selection techniques with our method, we envision the potential to alleviate the computational burden while still harnessing the power of auxiliary tasks for improved auxiliary-task learning performance. [1] *Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn. Efficiently identifying task groupings for multi-task learning. In NeurIPS, 2021.* [2] *Trevor Standley, Amir Zamir, Dawn Chen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? In ICML, 2020.* [3] *Amir Roshan Zamir, Alexander Sax, William B. Shen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In CVPR, 2018.*
Summary: The paper tackles the problem of learning multiple tasks together, which is known to lead to "task interference" or "negative transfer" issues. This is usually tackled by automatically scaling the task weights or gradients based on training statistics (e.g., GradNorm or uncertainty weighting of losses). In particular, the paper studies an asymmetric version of the problem, Auxiliary Task Learning (**ATL**), where one task may not be important at inference but can greatly improve performance when jointly trained with the main target task. The paper first explores potential causes of task interference under the lens of train/target distribution shifts. Then, the paper proposes **ForkMerge**, a novel optimization method to avoid task interference: for each update of ForkMerge, the current parameters are first duplicated. Then, the weights are updated separately: one copy on the target task only, while the other is jointly trained on the target and auxiliary tasks. Finally, the optimal task weights are found by finding the linear combination of these two sets of weights that maximizes validation accuracy (**Equation 12**). The algorithm can be extended to multiple auxiliary tasks, which requires an additional separate optimization branch for each new auxiliary task. The method is then evaluated on the NYU dataset (3 tasks) and DomainNet (6 domains) and compared to previous MTL and ATL approaches. Strengths: - I liked the analysis section of the paper; it contains interesting insights on the problem of task interference: for instance, gradient conflict is often named as a cause of negative transfer but its effect/strength is rarely actually measured in practice, although it would have been interesting to extend the analysis to more diverse scenarios.
- I also like the insight that task weighting methods should take generalization into account, which is captured in the proposed algorithm by finding the optimal task weights via evaluation on the validation set - The paper presents extensive experiments on two classical multitask/multidomain benchmarks, which are further completed in the supplementary material (e.g. additional backbone) Weaknesses: - **Training cost**: the paper should discuss the training cost in terms of memory efficiency more concretely/quantitatively: ForkMerge requires a separate set of parameters for each task. In addition, these parameters are also updated independently, hence the number of forward/backward passes will also increase with the number of tasks. Finally, the cost of searching for the optimal $\Lambda$ in Equation 13 will also increase with the number of tasks. These costs may remain reasonable in the application scenario with only one auxiliary task, but it's not clear how practical ForkMerge becomes when dealing with more than two tasks like in Section 5.2 - **Finding 2 does not seem very significant**. Finding 2 states that negative transfer is likely to occur when the added auxiliary task increases the train/test distribution shift for the target task: this seems like a fairly classic statement from statistical machine learning theory: for instance, generalization bounds for domain adaptation typically include a term measuring the discrepancy between the source/target domains. - **Chosen baselines might not all be a fair choice**: the multi-task optimization baselines (MGDA, GradNorm, PCGrad, etc.) typically aim to optimize performance on **all tasks**, as opposed to auxiliary task learning where there is a clear bias towards the target tasks: for instance, in ForkMerge, every branch includes the target task objective. For that reason, I do not think applying MTL methods "as-is" to the problem of ATL is the best baseline.
A stronger baseline could be to fine-tune the task weights in the "Equal Weights" (**EW**) baseline for performance on the target task; this might be too costly for DomainNet but would be reasonable for NYU as there are only two auxiliary tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - **Suggestion on writing**: I feel like the paper sometimes overuses abbreviations. For instance, only mentioning the tasks **P** and **Q** for DomainNet (the Painting and Quickdraw domains?) assumes the reader is very familiar with the dataset. In addition, many abbreviations are introduced in the paper, which often makes the text hard to read (for instance lines 154-159) - **Distribution shift and hypothesis shift**: I did not understand the point that *lines 213-221* try to convey and the difference between Equations 12 and 13 (outside of renaming $K$ to $B$ and allowing $\Delta_t$ time steps instead of only 1). Similarly, I'm not sure what insight **Figure 6** tries to convey. More specifically: the task weighting $\lambda$ is initially defined at the loss level (Equation 1); however, it seems that in later sections $\lambda$ is rephrased as a parameter mixing the distributions of the target and auxiliary tasks; while there are links between these two views, I found switching between the two interpretations in the text a bit confusing. It is also not clear what it means for multi-task applications: there is only one input distribution (the input images are the same for all tasks), so which distribution shift are we referring to? - From Appendix **C.3**, it seems that the test set of DomainNet was split into two parts: one for validation, and the other for test performance. I think this should be clearly stated in the main paper directly, as it means the reported results are not comparable to other DomainNet works Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no dedicated limitations section but the paper discusses the limitation of dedicating independent optimization branches for every auxiliary task (e.g. branch pruning to avoid handling too many branches in the DomainNet) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer pM6c for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** Concern on the training cost. Please refer to $\text{question 2 (Q2)}$ of our global rebuttal. **Q2:** Contribution of Finding 2. It is worth emphasizing that our contribution lies in providing a new perspective, that is, to consider the problem of auxiliary task learning from the standpoint of model generalization, and to offer a quantifiable metric in this context. Indeed, traditional measures of distribution distance, such as the Maximum Mean Discrepancy [1] and $\mathcal{H}\Delta\mathcal{H}$-Divergence [2], have been extensively employed in well-established problems like domain adaptation. However, their direct application to the realm of auxiliary task learning encounters specific challenges. The primary challenge stems from the original definition of Maximum Mean Discrepancy and $\mathcal{H}\Delta\mathcal{H}$-Divergence, which are based on marginal distributions. To overcome this limitation, we introduce an innovative approach by establishing distribution shift considerations in the output space, leading to $\text{Definition 3.5}$. [1] *Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. In JMLR, 2012.* [2] *Ben-David, S., Blitzer, J., Crammer, K., and Pereira, F. Analysis of representations for domain adaptation. In NeurIPS, 2007.* **Q3:** Comparison with stronger baselines. We follow the reviewer's suggestion and enhance the performance of the baselines MGDA, GradNorm, and PCGrad by fine-tuning the model obtained from multitask learning to each task. $\text{Table 4 of our global rebuttal pdf}$ presents the results on the NYUv2 dataset, where ForkMerge consistently outperforms the improved baselines. **Q4:** Suggestion on writing. Many thanks for the feedback.
As revision is not supported in the rebuttal stage, we will update our camera-ready version as follows: - We will reduce the use of abbreviations. For instance, in the analysis part ($\text{Section 3}$), we will directly use the full concepts such as "Transfer Gain," "Weak Negative Transfer," "Strong Negative Transfer," and "Confidence Score Discrepancy" instead of their abbreviations. - Besides, we will include a table that explains the meaning of all abbreviations used in the paper. **Q5:** Confusion on the distribution shift and hypothesis shift. **Explanation about $\lambda$.** In our paper, $\lambda$ serves a dual role: it acts as both the task weighting parameter at the loss level and as a coefficient governing the mixture of distributions associated with different tasks. - As discussed in $\text{Section 3.2}$, adjusting the task weighting parameter $\lambda$ leads to a change in the data distribution that the model is adapting to. This occurs because varying $\lambda$ influences the emphasis placed on different tasks during the learning process. We formally define the interpolated distribution in $\text{Equation 3}$. - The equivalence arises from the fact that the model trained on the interpolated distribution $\mathcal{T}\_\text{inter}$ is essentially the same as the model obtained through auxiliary-task learning across the distributions $\mathcal{T}\_\text{tgt}$ and $\mathcal{T}\_\text{aux}$. **Explanation about distribution.** We emphasize that when referring to distribution, we are specifically addressing the joint distribution. This is crucial: while auxiliary tasks might share the same input space, their output spaces invariably differ due to the inherent nature of diverse tasks. **Explanation about Figure 6.** - The mixture of hypotheses is different from the mixture of distributions.
To maintain this distinction, we consistently employ the symbol $\lambda$ to represent the mixture weights of distributions, while employing $\Lambda$ to denote the mixture weights of model hypotheses. - Regarding $\text{Figure 6}$, it intends to convey the similarity in outcomes between the mixture of model hypotheses and the mixture of distributions. This alignment in behavior enables us to assert that the mixture of hypotheses is an approximation of the mixture of distributions. **The difference between Equations 12 and 13.** $\text{Equation 13}$ represents a generalized form of $\text{Equation 12}$, and its generality can be attributed to three key aspects: - **Customizable Branching**: Unlike $\text{Equation 12}$, which involves a fixed number of tasks, $\text{Equation 13}$ allows for a more flexible setting with a user-defined branching number $B$, which allows for scenarios where pruning or modifying the original candidate branches is desired. - **Enhanced Flexibility in Candidate Branches**: Additionally, $\text{Equation 13}$ offers a broader perspective on candidate branches. It is not strictly constrained to target-auxiliary task pairs for optimization. Instead, the formulation is open to incorporating any desired custom design of candidate branches, accommodating the integration of prior knowledge or domain-specific insights. - **Extended Time for Lambda Estimation**: Importantly, $\text{Equation 13}$ changes the estimation of the optimal $\lambda$ by extending the estimation horizon. We have noticed that methods like Auto-$\lambda$, which try to estimate $\lambda$ within a single gradient update, often face challenges due to estimation fluctuations. ForkMerge addresses this by using a longer time window for $\lambda$ estimation, which reduces the negative impact of estimation errors on model parameters and makes the algorithm more robust. **Q6**: Split of DomainNet. Thanks for the feedback.
In our camera-ready version, we will explicitly highlight this split difference in the experimental section. --- Rebuttal Comment 1.1: Title: thanks for your reply Comment: Hello authors, thanks for your reply and clarifications! Regarding the difference between distribution and hypothesis shift, I still think the way it is described in the paper introduces more confusion than necessary (which ties in with the weakness I listed about Finding 2), although it is clearer now thanks to your reply. I would like to keep my original rating of (6). **Minor comments, mainly about writing clarity** **a. About the difference between $\lambda$ and $\Lambda$** In the response, you explain that *"we consistently employ the symbol $\lambda$ to represent the mixture weights of distributions, while employing $\Lambda$ to denote the mixture weights of model hypotheses"*; however, $\lambda$ is initially defined as the weights in the task losses. Building on this, Equation 11 only defines the vector $\Lambda$ to be essentially equal to $\lambda$; similarly, Equations (7) and (8) both define mixtures of weights using $\lambda$ and not $\Lambda$. Going further, it seems that instead, the intuition of $\lambda$ as a distribution mixture coefficient is the one that should be redefined? For instance, in line 216, you write *Thus, we transform the problem of mixture distribution into that of mixture hypothesis*. However, the previous paragraph(s) only show how you transform the problem of a mixture of losses to that of a mixture of hypotheses (Equations 9 to 12). As a reader, I found that the link between "mixture of losses" and "mixture of distributions" should be better formalized/motivated; it's briefly mentioned in Section 3.2 (Equation 3) but does not really tie in with the rest of the paper and the method of Section 4 **b.
regarding Equation 13**, I understand the points you mentioned, but I think the writing should be rephrased to motivate Equation (13) better: for instance, in the paper, $B$ is only introduced as the "number of candidate branches", while in your response you clearly mention its use for pruning/efficiency. The **extended time for lambda estimation** is not mentioned at all when introducing (13) (but it makes sense from a practical perspective). And similarly, the **enhanced flexibility** with respect to Equation 12 is also not mentioned, and is not apparent from the equation itself. --- Reply to Comment 1.1.1: Title: Thanks for the Reviewer's Reply Comment: We'd like to thank Reviewer pM6c again for providing an impressively insightful pre-rebuttal review, which has enabled us to make an effective response. We'd also like to thank you for carefully judging our feedback and acknowledging our work in the final review. Following your suggestion on writing clarity, in the next version we will delve deeper into the relationship between the concepts of "mixture of losses" and "mixture of distributions." We will also provide more explanation of $\text{Equation 13}$ to highlight its distinctions and advantages in comparison to $\text{Equation 12}$.
Summary: The authors conduct an analysis of negative transfer in auxiliary task learning, finding that gradient conflicts are not necessarily tied to negative transfer, but that auxiliary tasks that induce large distribution shifts from the new training distribution to the test distribution tend to cause negative transfer. The authors then propose ForkMerge, which repeatedly forks a model into branches trained on just the target task and trained on both the target task and auxiliary task(s), and then uses target task validation set performance to determine how to merge the forked models. Strengths: 1. The authors share an interesting and thoughtful analysis, investigating both gradient conflicts and induced distribution shifts as potential causes of negative transfer and concluding that gradient conflicts are not necessarily tied to negative transfer. 2. The ForkMerge approach that the authors introduce is intuitive and simple, and the results are consistent across a variety of tasks. 3. Well-motivated and significant problem Weaknesses: 1. In general, some critical details are omitted altogether in the main text, sometimes with no proper reference to the Appendix when details are missing. In particular: - In Section 3, please spell out Painting and Quickdraw (and later, Real in Section 5) and describe DomainNet when first introducing the tasks. If it needs to be brief, the authors can at the very least include a pointer to Appendix C for dataset and task details. - I also believe it is worth mentioning "We use ResNet-18 [8] pre-trained on ImageNet [3] for all experiments." in the main text rather than only noting it in the appendix. 2. I would have liked to see more overlap between works cited in the Experiments section and the works discussed in Related Work, especially discussion about and comparison of adaptive auxiliary- and multi-task learning strategies and settings. In particular, [36] and the original meta-learning paper [13]. 
A couple of additional related works the authors may consider adding: https://arxiv.org/abs/2205.14082 and https://arxiv.org/abs/2212.01378 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can the authors describe in more detail the computational overhead of ForkMerge, especially with respect to time and memory, and in relation to the existing approaches from Section 5? I am most curious about the comparison to meta-learning. 2. Do the authors predict that these results and the practicality of ForkMerge are limited to settings where one starts with a pretrained initialization? 3. Re: Definition 3.5, are there alternative measures of distribution shift that the authors have considered? I am personally somewhat skeptical of confidence scores as a basis for measuring distribution shift, given that many models are poorly calibrated in the first place. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors test their method using only one model, ResNet-18 pre-trained on ImageNet, on image tasks. I won't push for additional experiments, but I would like to see the authors discuss what assumptions from this paper's experiments are expected to be critical for generalization of findings to other settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer uS5H for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** Some critical details are omitted in the main text. Thank you for the feedback. Below, we outline our intended revisions: - We will make sure to spell out acronyms such as "Painting", "Quickdraw", and "Real" in the main text when first introducing them. Moreover, we will provide comprehensive descriptions of these tasks and the DomainNet dataset in the main text. - In our revised version, we will include a statement in the main text to mention that we utilize ResNet-18 pre-trained on ImageNet. **Q2**: More related work. We have carefully checked the recommended papers and agree that they are closely related to our research. In the camera-ready version, we will add the following discussion to the related work section. AANG [1] formulates a novel search space of auxiliary tasks and adopts the meta-learning technique, which prioritizes target task generalization, to learn single-step task weightings. This parallel finding highlights the importance of target task generalization, and we further introduce multi-step task weightings to reduce the estimation uncertainty. Another parallel method, ColD Fusion [2], explores collaborative multitask learning and proposes to fuse each contributor's parameters to construct a shared model. In this paper, we further take into account the diversity of tasks and the intricacies of task relationships and derive a method for combining model parameters from the weights of task combinations. [1] *Dery, Lucio M., et al. AANG: Automating Auxiliary Learning. In ICLR, 2023.* [2] *Don-Yehiya, Shachar, et al. ColD Fusion: Collaborative descent for distributed multitask finetuning. arXiv preprint.* **Q3**: Discussion on the computation cost. Please refer to $\text{question 2 (Q2)}$ of our global rebuttal.
**Q4**: Is ForkMerge limited to settings where one starts with a pretrained initialization? The practicality of ForkMerge is not confined to settings with a pretrained initialization. In our experimental evaluation, we also employed the AliExpress, CIFAR-10, and SVHN datasets without leveraging any pretraining, as reported in $\text{Table 5}$ and $\text{Table 6}$, respectively. However, we did choose to use pretrained models in the case of DomainNet and NYUv2. This decision was rooted in two key factors. First, we aimed to ensure a fair comparison with prior works. Second, while not a universal requirement, the utilization of pretrained models is a prevalent setting. In numerous real-world applications, the absence of pretraining can significantly degrade performance. **Q5**: Are there alternative measures of distribution shift that the authors have considered? In the field of machine learning, various measures of distribution shift have been introduced, such as the Maximum Mean Discrepancy [3] and $\mathcal{H}\Delta\mathcal{H}$-Divergence [4], which have found applications in classic problems like domain adaptation. However, extending these measures to the context of auxiliary task learning presents certain challenges. The primary hurdle arises from the fact that Maximum Mean Discrepancy and $\mathcal{H}\Delta\mathcal{H}$-Divergence are originally defined over marginal distributions. This poses difficulties when dealing with auxiliary task learning scenarios where distinct tasks share a common feature space, yet exhibit variations in their output spaces. To address this limitation, we define the distribution shift in the output space, leading to the formulation of Confidence Score Discrepancy. While it is acknowledged that a deep learning model's predictive confidence on a specific data point might not always align with correctness, we posit that the expected confidence over a distribution captures certain characteristics of that distribution. 
While model calibration remains a concern, our definition places more emphasis on the relative magnitude of confidence. For instance, if the expected confidence for a test distribution $D_1$ is lower than that of another test distribution $D_2$, we interpret this as indicating a larger distance between the test distribution $D_1$ and the training distribution. [3] *Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. In JMLR, 2012.* [4] *Ben-David, S., Blitzer, J., Crammer, K., and Pereira, F. Analysis of representations for domain adaptation. In NeurIPS, 2007.* **Q6:** Assumptions of findings. - $\text{Finding 1 (Section 3.1)}$: We believe that the phenomenon we observed, where negative transfer is not solely attributed to gradient conflicts and vice versa, holds general relevance. This observation is supported by numerous instances, such as the L2 regularization mentioned in the paper. - $\text{Finding 2 (Section 3.2)}$: The assertion that negative transfer is more likely when auxiliary tasks increase the distribution shift between training and test data for the target task aligns with the principles of typical supervised learning. Validating this assertion indeed presents certain challenges. To address this concern, we introduced the concept of Confidence Score Discrepancy, which quantifies the joint distribution distance in auxiliary task learning. It is important to note that the application of this concept assumes the presence of a confidence measure within the model. In cases such as regression tasks like the NYUv2 dataset, where this confidence measure is not as well-defined, its applicability may be limited. This is one of the reasons why we chose to conduct our analysis experiments on the DomainNet dataset, as it offers a suitable environment for investigating these intricacies.
In summary, we hypothesize that while certain aspects may have limitations in diverse scenarios, the principles underlying them remain widely applicable. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and additional context! Updating my score to 7 assuming the authors will incorporate the discussions and clarifications into the camera ready if accepted. --- Reply to Comment 1.1.1: Title: Thanks for the Reviewer's Reply Comment: We'd like to thank Reviewer uS5H again for providing an impressively insightful pre-rebuttal review, which has enabled us to make an effective response. We'd also like to thank you for carefully judging our feedback and acknowledging our work in the final review. **We guarantee that we will incorporate the discussions and clarifications into the next version.**
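The Confidence Score Discrepancy discussed above could be sketched as follows. This is a minimal illustration with hypothetical function names, assuming a classifier's confidence on a data point is taken as its maximum softmax probability; it is not the paper's exact implementation.

```python
import numpy as np

def expected_confidence(logits):
    """Mean max-softmax confidence over a batch of logits."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(p.max(axis=1).mean())

def confidence_score_discrepancy(train_logits, test_logits):
    """A lower expected confidence on the test split is read as a
    larger shift away from the training distribution."""
    return expected_confidence(train_logits) - expected_confidence(test_logits)
```

In this reading, comparing the discrepancy computed for two test distributions $D_1$ and $D_2$ orders them by their distance to the training distribution, without requiring the absolute confidence values to be calibrated.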
Summary: Auxiliary-Task-Learning (ATL) has been studied from the perspective of optimization, which aims to improve the performance of the target task by leveraging similar tasks. However, ATL can sometimes suffer from negative transfer, where the performance of the target task actually decreases when auxiliary tasks are added. In this paper, the authors take a more holistic perspective by considering both optimization and generalization. They propose a new method called ForkMerge that is able to resolve negative transfer and improve the performance of the target task. Strengths: * ForkMerge is a simple and practical algorithm that demonstrates strong empirical results. * Many analytic figures help readers understand negative transfer better, although they are limited to the DomainNet dataset. Weaknesses: * There seems to be some logical jump between the two observations listed in Section 3 and the proposed algorithm in Section 4. It is quite unclear how they are connected directly, and it'll be great to have some theoretical formulation for the connection (at least at a high level). For example: why is dynamically adjusting $\lambda$ superior to the previous method? How does this address the generalization problem? I think this limits the contribution of this work. * ForkMerge essentially optimizes a fork of the given model and trains them separately multiple times; hence, it does require significantly larger compute during model training. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * As discussed in the weaknesses section, the connection between the two main observations and ForkMerge is not strong. Can you provide a high-level theoretical analysis of how ForkMerge can achieve better generalization, not just based on the empirical analysis? * Forking a large model is very expensive.
Did the authors consider a simple ensemble-based approach, which should be similar in compute (i.e., merging at the prediction level rather than the actual model weights)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: I don't see the negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer QCm1 for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** Logical Connection Enhancement. Please refer to $\text{question 1 (Q1)}$ of our global rebuttal. **Q2**: Why is ForkMerge superior to previous methods? **Enhanced Generalization.** While methods like GCS adjusted $\lambda$ based on the training data at each step, this primarily tackled optimization-level challenges without always guaranteeing improved generalization. The dynamic $\lambda$ adaptation in ForkMerge inherently captures the importance of generalization by directly considering the target validation performance during each merge step. **Mitigation of Estimation Uncertainty.** Approaches such as Auto-$\lambda$, which attempted to estimate $\lambda$ from a single gradient update, face challenges due to estimation fluctuations. The negative impact of inaccurately estimated $\lambda$ values on model parameters might be particularly noticeable. ForkMerge addresses this by using a longer time window for $\lambda$ estimation, which reduces the negative impact of estimation errors on model parameters. The introduction of branch merging within our framework further strengthens the training process against inaccuracies in $\lambda$ estimation, making the algorithm more robust. **Efficient Computation.** The adoption of techniques such as Grid Search, which entailed exhaustive training and $\lambda$ tuning, led to escalating computational demands as the number of auxiliary tasks increased, resulting in exponential complexity. In contrast, ForkMerge dynamically adjusts $\lambda$ during each merging step, significantly reducing the complexity associated with exhaustive search.
Furthermore, our approach mitigates the problem of suboptimal solutions that can arise due to fixed auxiliary task weights, as indicated in $\text{Figure 3 in Appendix D.1}$, and achieves a better trade-off between performance and computation cost. We hope these insights provide a clearer perspective on the advantages of ForkMerge's dynamic $\lambda$ adjustment mechanism and its superiority over previous methods. **Q3:** Concern on larger compute during the model training. Please refer to $\text{question 2 (Q2)}$ of our global rebuttal. **Q4**: Consider an ensemble-based approach focusing on merging the prediction? In this response, we'd like to highlight the distinct advantages of our ForkMerge algorithm in comparison to the suggested ensemble approach, while addressing the aspect of computational requirements. While ensemble methods indeed provide a means to combine multiple models, especially at the prediction level, it's essential to note that the computational costs associated with ensemble techniques during the testing phase can be substantial. Ensemble methods require making predictions using each individual model and then aggregating these predictions, leading to increased inference time and resource utilization. Thus, we have explored an alternative: conducting ensemble learning with multiple models during the training phase and distilling them into the target task. We use the term Pred-ensemble to represent this method. The experimental results on the DomainNet dataset are presented in $\text{Table 3 of our global rebuttal pdf}$. The Pred-ensemble method effectively improves the performance on $4$ of $6$ tasks, yet still lags behind our proposed ForkMerge on all tasks. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. I have looked at the rebuttal (for my review and also the global), and the rebuttal does not fully address my concerns; hence I'll be keeping my current score.
To be more specific, the logical connection still seems to be a bit hazy to me. I looked at Global Rebuttal #1, but I am not fully convinced. For example, "hyperparameter to be properly tuned -> let's dynamically change it" is not well connected. Why is the model so sensitive to the hyperparameters? What model property is inducing this? Can we make modifications to the modeling assumptions so that this can be handled differently? These are not discussed well. --- Reply to Comment 1.1.1: Title: Replying to Reviewer Comment: Thanks again for your dedication to reviewing our paper. We will provide additional clarification in this response. Firstly, it's important to clarify that **ForkMerge is inspired by, but not a direct extension of, the findings presented in $\text{Section 3}$.** ForkMerge is rooted in two fundamental principles of machine learning: the linear combination of losses in auxiliary task learning (as detailed in $\text{Equation 1}$) and the utilization of stochastic gradient descent for optimization (as outlined in $\text{Equation 5}$). Through these foundational principles, we naturally arrive at the concept of linearly combining model parameters (elaborated upon in $\text{Appendix A}$). Hence, the insights in $\text{Section 3}$ offer an intuitive explanation for the approach in $\text{Section 4}$, and even in the absence of $\text{Section 3}$, ForkMerge could still be derived through theoretical reasoning. **Q1:** Logical connection between our findings and proposed method. - $\text{Section 3.1}$ revisits the issue of gradient conflict and concludes that it's not necessarily correlated with negative transfer. **Thus, unlike prior methods, our approach does not initiate from the standpoint of gradient conflict.** - $\text{Section 3.2}$ underscores the importance of considering generalization.
Drawing from these two points, ForkMerge leverages target task validation error for $\lambda$ selection and deduces the form of model parameter interpolation, as illustrated in $\text{Equation 7}$. Building upon these findings, our approach further: - Extends to dynamically adjusting $\lambda$ due to the evolving importance of different tasks during training (refer to $\text{Appendix D.1 Figure 3}$). - Employs longer time intervals for $\lambda$ estimation to reduce noise during the estimation process. --- Reply to Comment 1.1.2: Title: Replying to Reviewer Comment: **Q2:** Why is the model so sensitive to the hyperparameters? What model property is inducing this? The sensitivity of the model to the weights assigned to different tasks is an inherent aspect of auxiliary-task learning. These task weights serve as crucial hyperparameters that directly influence the optimization objectives of the model. In the realm of auxiliary task learning and multi-task learning, numerous studies focus on tuning the weights of different tasks to attain optimal model performance. For instance: - **Uncertainty Weighting (UW)** [1] employs task uncertainties to weight the loss functions, effectively balancing the significance of various tasks. - **Dynamic Weighted Averaging (DWA)** [2] utilizes the decreased rate of task losses over time to weight the loss function dynamically. - **Gradient-Cosine Similarity (GCS)** [3] and **Auto-$\lambda$** [4] estimate dynamic task weights within a single iteration, using gradient cosine similarity and finite difference approximation, respectively. Furthermore, $\text{Section 3}$ of our paper provides a comprehensive exploration of the considerable impact that task weights have on the model from different perspectives: - **Multitask Optimization Perspective:** The task weights govern the optimization goal. 
Setting the auxiliary task weight to $0$ disregards the auxiliary task, while an infinitely large weight would cause the auxiliary task to dominate the primary task. This establishes the existence of an optimal equilibrium. - **Distribution Perspective**: The task weights influence the training distribution, as illustrated in $\text{Figure 3}$. A proper selection of task weights can better fit the testing distribution of the target task. [1] *Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, 2018.* [2] *Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In CVPR, 2019.* [3] *Yunshu Du, Wojciech M Czarnecki, Siddhant M Jayakumar, Mehrdad Farajtabar, Razvan Pascanu, and Balaji Lakshminarayanan. Adapting auxiliary losses using gradient similarity. arXiv preprint arXiv:1812.02224, 2018.* [4] *Shikun Liu, Stephen James, Andrew J Davison, and Edward Johns. Auto-lambda: Disentangling dynamic task relationships. In TMLR, 2022.* **Q3:** Can we make modifications to the modeling assumptions so that this can be handled differently? The model's sensitivity to task weights stems from the fundamental intricacies of auxiliary-task learning, where careful consideration is required to strike a balance between mitigating overfitting and managing potential negative transfer. An intuitive solution could involve employing larger models to address this challenge, yet this approach brings forth its own set of complexities. Apart from the additional computational costs and memory overhead that come with larger models, a critical concern arises in the form of increased susceptibility to overfitting. Thus, a delicate trade-off between mitigating overfitting, a challenge ingrained in single-task learning, and the introduction of potential negative transfer due to auxiliary tasks must be meticulously weighed. 
Indeed, during the rebuttal phase, we delved into the prospect of employing larger models to examine this hypothesis. Specifically, we substituted the backbone network with ViT-Base [5], which has been pretrained on ImageNet 21K. We then conducted experiments on the DomainNet dataset, as outlined in $\text{Section 5.2}$. The results, as presented in $\text{Table 2 of our global rebuttal pdf}$, yield insightful observations: - ViT-Base demonstrates enhanced average accuracy through the equal weighting (EW) method, as compared to single-task learning. We posit that this improvement can be attributed to the data-hungry nature of vision transformers [5], wherein the advantages of auxiliary tasks in alleviating overfitting could potentially outweigh any interference introduced by the auxiliary tasks themselves. - ForkMerge consistently outperforms the comparison methods across all tasks, with an average accuracy of $73.3$ for ForkMerge, as opposed to $70.0$ for Post-train. This robust performance across various network architectures further substantiates the effectiveness of the ForkMerge approach. [5] *Dosovitskiy, Alexey et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR, 2021.*
Rebuttal 1: Rebuttal: We would like to sincerely thank all the reviewers for providing insightful reviews and valuable comments. Your reviews are of great importance to us in improving the quality of this work. **In this global rebuttal, we aim to clarify the common questions from reviewers, and we have responded to each reviewer with a separate response for other questions. The full results of additional experiments are attached in the one-page pdf.** **Q1:** How do the new findings motivate the proposed ForkMerge? Below we outline the motivations behind the design of ForkMerge in light of our new findings. - **[Finding]** $\text{Section 3.1}$ reveals that the presence of gradient conflict does not necessarily lead to negative transfer, as long as the hyperparameter $\lambda$ is appropriately tuned. Additionally, $\text{Section 3.2}$ emphasizes the importance of considering generalization to mitigate negative transfer effectively. **[Algorithm Design]** Based on these findings, we opt to dynamically adjust the hyperparameter $\lambda$ according to the target validation performance in ForkMerge ($\text{Section 4.1}$). - **[Finding]** $\text{Section 3}$ highlights that in scenarios where weak negative transfer (WNT) occurs, selecting an appropriate value for $\lambda$ can alleviate the problem. However, in cases of strong negative transfer (SNT), setting $\lambda$ to $0$ becomes necessary. **[Algorithm Design]** In each merging step of ForkMerge, we perform a search step to identify the optimal value of $\lambda$, which effectively mitigates weak negative transfer. Furthermore, for instances of strong negative transfer, ForkMerge is able to set $\lambda$ to 0 to prevent negative transfer ($\text{Section 4.1}$). Additionally, we have introduced a pruning mechanism to remove SNT forking branches, thus reducing the computation cost ($\text{Section 4.2}$). 
- **[Finding]** $\text{Section 3.2}$ indicates that negative transfer is likely to occur when the introduced auxiliary task enlarges the distribution shift between the training and test data for the target task. To address this issue, it is crucial to select auxiliary tasks that decrease the distribution shift between training and test data for the target task. **[Algorithm Design]** To address the distribution shift problem, the general form of ForkMerge constructs mixture distributions that comprise diverse data shifts relative to the target distribution. Subsequently, models trained on these different distributions are combined dynamically to approach the optimal parameters ($\text{Section 4.2}$). **Q2:** Concern about the computation cost and trade-off between efficiency and accuracy. **Clarification on the Computation Cost.** Firstly, we have developed several techniques to reduce computation cost. Below, we provide a detailed explanation: - **Pruning Strategy:** As introduced in $\text{Section 4.2}$, we can prune the forking branches with $\Lambda_k=0$ and only keep the branches with the largest $K'<K$ values in $\Lambda$ after the early merge step, where $K$ represents the total number of tasks. - To illustrate the effectiveness of the pruning strategy, we present results in $\text{Section 5.2}$ on Auxiliary-Domain Image Recognition and CTR and CTCVR Prediction tasks. For instance, on the CTR and CTCVR Prediction task, we initially construct up to $8$ branches with different task weights, but after the first merge step, we prune them to $3$ branches, achieving considerable computational savings. - **Greedy Strategy in the Merge Step:** In $\text{Algorithm 2 of Appendix A.2}$, we introduce a greedy strategy during the merge step. This modification reduces the computation complexity from exponential to linear complexity when searching for optimal task weighting. 
- **Validation Set Sampling:** As mentioned in $\text{Appendix A.1}$, the costs associated with estimating validation performance $\hat{\mathcal{P}}$ in the search step are usually negligible. However, when the validation set size is relatively large, we can resort to sampling to reduce the computational cost further. Further, we have conducted an analysis of the computation cost in $\text{Appendix D.2}$: - Although only one model is optimized in most previous auxiliary-task learning methods, their computational costs are not necessarily $\mathcal{O}(1)$. For example, gradient balancing methods require computing gradients of each task, thus leading to $\mathcal{O}(K)$ complexity. In addition, calculating the inner product or norm of the gradients will bring a calculation cost proportional to the number of network parameters. - To support our claims, we assess the actual training time across methods on NYUv2. As depicted in $\text{Figure 4 of Appendix D.2}$, ForkMerge does not require more training time than other auxiliary task learning methods, including GCS, OL_AUX, ARML, and Auto-$\lambda$. **Memory Utilization.** In terms of memory usage, it's important to note that the optimization of each branch within ForkMerge is entirely independent. This enables us to load only the model parameters corresponding to the particular branch being trained at any given time. Consequently, the storage requirements are comparable to those of single task learning (STL), resulting in minimal memory overhead. **Trade-off between Efficiency and Accuracy.** In $\text{Section 5.2}$, we provide an analysis on DomainNet to explore the trade-off between efficiency and accuracy. Results in $\text{Table 4}$ reveal that as the number of branches increases, the gain by auxiliary tasks will enlarge while the gain brought by each branch will reduce.
In this paper, we propose to address this trade-off through the pruning strategy, which allows users to customize the number of branches based on their specific needs and available computational resources. Pdf: /pdf/9e33142652e758b6d9dadfc01e2f8f0c4281a8d5.pdf
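The greedy merge strategy described in the global rebuttal (linear rather than exponential complexity in the number of branches, with $\lambda = 0$ discarding a harmful branch) could be sketched as follows. The function and weight grid are hypothetical placeholders illustrating the idea, not the paper's Algorithm 2.

```python
import numpy as np

def merge_step(theta_branches, evaluate, grid=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Greedily fold each branch into the running merged parameters,
    picking the interpolation weight on the grid that maximizes target
    validation performance (evaluate: params -> score). Cost is linear
    in the number of branches, not exponential over all weight combos."""
    merged = theta_branches[0]  # start from the target-task branch
    for theta in theta_branches[1:]:
        best = max(grid, key=lambda lam: evaluate((1 - lam) * merged + lam * theta))
        merged = (1 - best) * merged + best * theta  # lam = 0 prunes a harmful branch
    return merged
```

After each such merge step, the merged parameters would be synchronized back to all branches before the next fork step, as described in Section 4.
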
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: To fully leverage the knowledge from auxiliary tasks and mitigate negative transfer issues, this paper introduces ForkMerge, which automatically searches for varying task weights for auxiliary tasks by minimizing target validation errors. ForkMerge is evaluated under various settings, including multi-task learning, multi-domain learning, and semi-supervised learning. The results demonstrate that it outperforms existing methods and proves to be effective. Strengths: This paper demonstrates a well-organized structure. The authors have conducted experiments using diverse datasets and tasks, resulting in promising outcomes. Furthermore, the paper provides theoretical analysis of each component, offering valuable insights into their impact on performance. Overall, the clear structure and in-depth analyses make it an engaging and compelling read. Weaknesses: The paper lacks clear explanations for certain sentences, training details, and figure information. For example, there is a lack of clarity regarding the data division strategy for each branch and the effectiveness of the learned weights. Additional elaboration and detailed analysis in these areas would enhance the overall understanding and impact of the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: W1. The paper does not provide a clear explanation of how the method filters out harmful parameter updates to mitigate negative transfer after merging and synchronizing the parameters of each branch. W2. Figure 3 would benefit from improved clarity; using distinct colors or different markers would be helpful. Additionally, why does the number of auxiliary task data points increase with increasing $\lambda$? W3. Lacks clarity regarding the data division strategy for each branch. It is not explicitly explained whether each branch is trained with a part of the data or the full data.
Moreover, the impact of increasing the number of branches on computational cost and its trade-off with efficiency and accuracy is not thoroughly addressed in the paper. W4. I am curious about the comparison between the learned weights by ForkMerge and the optimal weights. For instance, in Figure 3 (b), for WNT tasks, do the learned weights resemble the optimal weights shown in the figure? W5. It would be valuable to investigate and illustrate the trajectory of the learned weights over the course of training to understand their dynamics and convergence patterns. To gain a better understanding of the effectiveness of the ForkMerge approach, it would be insightful to compare the performance of fixed learned weights by ForkMerge with dynamically learned weights during training. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations could be clarified further in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer G3Ms for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** Clarification on the data division strategy for each branch. Depending on the characteristics of the auxiliary task learning scenario, there are two circumstances. - In the case of the NYUv2 dataset, multiple tasks share the same input, but their outputs are different. In this setup, each branch in the ForkMerge algorithm has the same input data, which includes the entire dataset. The distinction between different branches solely lies in the task weighting. - In contrast to the NYUv2 scenario, for datasets like DomainNet, different tasks have both different inputs and outputs. In these cases, for each branch, if the task weighting of a specific task is set to $0$, the data from that particular task will not be used for training the corresponding branch. We appreciate the reviewer's feedback and will add clarification on the data division strategy for each branch in our camera-ready version. **Q2:** The trajectory of the learned weights over the course of training. We have visualized the trajectory of the learned weights in $\text{Figure 3 of Appendix D.1}$, which indicates that the relative ratio of each forking branch is dynamic and varies from task to task. **Q3:** Clarification on the effectiveness of the learned weights. Comparison between the learned weights by ForkMerge and the optimal weights. **Effectiveness of the Learned Weights**. We would like to clarify that the learned weights of ForkMerge are a sequence of task weightings rather than a fixed one. - At each merging step, ForkMerge will search for an optimal task weighting based on the current learning status of all branches. - $\text{Figure 3 in Appendix D.1}$ visually demonstrates the dynamic behavior of task weighting during the training process in ForkMerge.
As shown in the figure, there is no evidence to suggest that the task weighting will converge to a fixed value during training. Instead, the algorithm continuously adjusts the task weighting. Regarding the effectiveness of the learned sequence of task weightings, our experiments in $\text{Section 5}$ demonstrate that ForkMerge consistently outperforms existing methods on various benchmarks. **Comparison with Optimal Weights.** We are not entirely sure what the "optimal weights" in your question refer to; we assume they mean the static weights obtained from grid search. - We have provided a comparison between ForkMerge and grid searching $\lambda$ on the NYUv2 dataset, as presented in $\text{Appendix D.4}$. In $\text{Figure 6 of Appendix D.4}$, we visualize the performance comparison of all methods with grid search. For grid search, there are a total of $27$ weighting configurations. It can be observed that ForkMerge achieves substantial improvement over grid search on all tasks. If there is any confusion or misunderstanding regarding the "optimal weights" mentioned in your question, we would be glad to answer any further questions and provide additional clarification. **Q4:** Comparison with fixed learned weights by ForkMerge. As clarified above, defining fixed learned weights for ForkMerge might not be straightforward and may not hold a meaningful interpretation. It is important to note that ForkMerge naturally generates a dynamic sequence of task weightings, continuously adapting during the training process. And our experiments in $\text{Appendix D.4}$ provide evidence that this dynamic task weighting mechanism outperforms grid searching static weights. **Q5:** Clarification on how the method filters out harmful parameter updates to mitigate negative transfer.
As outlined in $\text{Section 4}$ of our paper, ForkMerge operates through an iterative process involving fork and merge steps: - During the fork step, each branch in ForkMerge is independently trained. It is in this step that harmful parameter updates might occur, potentially compromising the performance on the target task. - In the merge step, ForkMerge searches for the optimal task weighting combination of different branches. This mechanism empowers ForkMerge to dynamically adjust the task weighting for each branch based on its contribution to the overall performance. As a result, ForkMerge can decrease the weighting of branches where negative transfer occurs. In extreme cases, it can set the weighting to $0$, ignoring harmful parameters. After merging, the newly obtained parameters are synchronized across all branches. By synchronizing the new parameters to all branches, ForkMerge ensures that the harmful parameter updates experienced during the fork step are effectively filtered out. The filtering process occurs post-merging, allowing each branch to benefit from the collective knowledge while mitigating the influence of negative transfer. **Q6:** Why does the number of auxiliary task data points increase with increasing $\lambda$ in Figure 3? As discussed in $\text{Section 3.2}$, adjusting $\lambda$ will change the data distribution that the model is fitting. When the weighting hyperparameter of the auxiliary task increases, the effect of the auxiliary task on the interpolated distribution will also increase. As a result, to visualize the impact of $\lambda$ on the interpolated training distribution, we let the frequency of auxiliary task points be proportional to $\lambda$ (introduced in $\text{Appendix B.2}$). **Q7:** The impact of increasing the number of branches on computational cost and its trade-off with efficiency and accuracy is not thoroughly addressed in the paper. Please refer to $\text{question 2 (Q2)}$ of our global rebuttal.
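The visualization procedure in the Q6 answer above (auxiliary points drawn with frequency proportional to $\lambda$, per Appendix B.2) could be sketched as follows. The normalization $\lambda/(1+\lambda)$ is one plausible choice for turning the proportionality into a mixture probability, not necessarily the paper's exact one.

```python
import random

def sample_interpolated(target_data, aux_data, lam, n, seed=0):
    """Draw n points from an interpolated training distribution in which
    auxiliary points appear with frequency proportional to lam
    (lam = 0 recovers the pure target distribution)."""
    rng = random.Random(seed)
    p_aux = lam / (1.0 + lam)  # assumed normalization of the mixing weight
    return [rng.choice(aux_data) if rng.random() < p_aux else rng.choice(target_data)
            for _ in range(n)]
```

Under this sketch, increasing $\lambda$ raises the fraction of auxiliary points in the plotted sample, which is why the number of auxiliary task data points grows with $\lambda$ in Figure 3.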
Summary: This paper strives to mitigate negative transfer in auxiliary-task learning by optimizing the coefficients assigned to auxiliary tasks. By conducting an empirical investigation into the factors contributing to negative transfer, this paper reveals two interesting findings. Based on the findings, a new approach named ForkMerge is proposed to mitigate negative transfer and boost the performance of auxiliary-task learning. Extensive experiments demonstrate the effectiveness of the proposed approach. Strengths: + The problem is well-defined and well-motivated. + The findings seem interesting. + The proposed approach is reasonable and well presented. Weaknesses: - It is not clear how the new findings motivate the proposed ForkMerge. - I'm curious about the performance when directly trying various lambdas (grid search) during multiple training sessions. Although the proposed method is more efficient, there is a concern about potential accuracy trade-offs. I seek to understand if the proposed approach sacrifices accuracy compared to the direct lambda variation method. - The simple post-train method leads to superior performance over other complex approaches, which makes me doubt the significance of research efforts in this area over the past years. This paper claims that the main drawback of the post-train method is that it fails to consider the task relationship in the pre-training phase, and suffers from forgetting during fine-tuning. I'm wondering about the performance of the post-train method if it considered the task relationship in the pre-training phase and prevented forgetting during fine-tuning, such as by distillation. In this scenario, the post-train method might manage to surpass the proposed ForkMerge approach, considering ForkMerge's slight improvement over Post-train on most tasks. - The experimental evaluation is limited to small datasets, such as NYUv2 and CIFAR-10, and employs a relatively small network like ResNet-50 (DeepLabV3+).
As a result, there is uncertainty regarding the efficacy of the proposed method on larger datasets and powerful networks like transformers, which already excel in single-target tasks. Consequently, assessing the true significance of the proposed method in the current era of deep learning becomes challenging. - Too many abbreviations make the paper hard to follow in some paragraphs. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Not sure Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer yZ6C for providing insightful reviews and valuable comments. We have clarified the questions in the following response. **Q1:** It is not clear how the new findings motivate the proposed ForkMerge. Please refer to $\text{question 1 (Q1)}$ of our global rebuttal. **Q2:** Comparison with directly grid-searching the task weighting $\lambda$. In $\text{Figure 6 of Appendix D.4}$, we present the comparison of all methods with grid search. For grid search, there are a total of $27$ weighting configurations. Our observations are as follows: - Existing methods typically yield performance trade-off points that lie along the scalarization Pareto front. - ForkMerge produces results that lie beyond the Pareto front, resulting in substantial performance gains over the grid search technique. - This improvement can be attributed to the fact that grid search tends to converge towards suboptimal solutions due to its reliance on fixed auxiliary task weights. Conversely, ForkMerge possesses the capability to adjust task weights dynamically throughout training, as depicted in $\text{Figure 3 of Appendix D.1}$. **Q3:** Comparison with the improved Post-train method. To address the reviewer's concern, we will first discuss the Post-train method and then present additional experimental results with an improved version of Post-train. **Discussion on Post-train Method.** - Post-train can be seen as a specific instance within ForkMerge's framework. In the first half of the training period, only branches with equal task weighting are employed, while in the second half of the period, only branches with single task weighting are used. - Our experimental results demonstrate the superiority of ForkMerge over Post-train across all benchmark tasks.
Specifically, ForkMerge achieves substantial performance gains in CTR and CTCVR prediction tasks ($\text{Table 5}$: $+1.30\%$ *vs.* $+0.14\%$) and the semi-supervised learning task ($\text{Table 6}$: $+46.3\%$ *vs.* $+30.4\%$). - Lastly, when the performance of single task learning (STL) is worse than equal weighting (EW), the fixed post-training strategy may not yield performance gains. For example, when employing ViT-Base as the backbone network on the DomainNet dataset, where ViT is particularly data-hungry, the performance of the Post-train approach falls behind that of EW and ForkMerge. For comprehensive results, please consult $\text{Table 2 of our global rebuttal pdf}$. **Additional Experiments.** We enhance the Post-train method by adopting the Knowledge Distillation (KD) technique [1] to preserve knowledge from the pre-training phase. $\text{Table 1 of our global rebuttal pdf}$ presents the results on the DomainNet dataset. As evident from the table, ForkMerge outperforms the improved Post-train method. [1] *Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NeurIPS Workshop, 2014.* **Q4:** Limitation on the scale of evaluation datasets. - We adopt the NYUv2 dataset in the scene understanding task, and the CIFAR10 and SVHN datasets in the semi-supervised learning task, as they are widely used in the auxiliary task learning literature. By doing so, we can provide a fairer and more meaningful comparison with prior methods. - Additionally, as shown in $\text{Section 5}$, we further conduct experiments on the medium-scale DomainNet dataset (about $0.6$M images) and the large-scale AliExpress dataset (over $100$M records). ForkMerge clearly outperforms existing methods on both datasets, affirming its efficacy across datasets of different magnitudes. **Q5:** Experiments with advanced architectures such as transformers.
To address the reviewer's concern, we replace the backbone network with ViT-Base [2] pretrained on ImageNet-21K and repeat the experiments on DomainNet of $\text{Section 5.2}$. $\text{Table 2 of our global rebuttal pdf}$ presents the results, and we have the following observations. - With ViT-Base, the equal weighting (EW) method achieves higher average accuracy than single task learning. We conjecture that this improvement can be attributed to the data-hungry nature of vision transformers [2]. In this case, the benefits of reducing overfitting through the use of auxiliary tasks may outweigh the potential task interference problem. - ForkMerge outperforms the compared methods on all tasks (average accuracy: ForkMerge $73.3\%$ *vs.* Auto-$\lambda$ $71.5\%$), validating its efficacy across different network architectures. [2] *Dosovitskiy, Alexey et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR, 2021.* **Q6:** Too many abbreviations make the paper hard to follow in some paragraphs. Many thanks for the reviewer's feedback. We will update our camera-ready version as follows: - We will reduce the use of abbreviations. - For instance, in the analysis part ($\text{Section 3}$), we will directly use the full concepts such as "Transfer Gain," "Weak Negative Transfer," "Strong Negative Transfer," and "Confidence Score Discrepancy" instead of their abbreviations. - Besides, we will include a table that explains the meaning of all abbreviations used in the paper. - We will provide more details in the main text rather than solely discussing these issues in the Appendix. - For instance, in the analysis part ($\text{Section 3}$), we will include a description of the DomainNet dataset, outline our dataset processing steps, and provide details about the network.
- In the method part ($\text{Section 4}$), we will elaborate on the data division strategy for each branch, and provide details about how we prune branches and efficiently merge different branches to reduce computation costs. Additionally, we will provide a discussion comparing $\text{Equation 12}$ and $\text{Equation 13}$ to clarify their difference. - In the experiment part ($\text{Section 5}$), we will present more training details, including the specific network architecture and the important hyperparameters. --- Rebuttal Comment 1.1: Comment: Thank the authors for their comprehensive response, which has effectively solved some of my major concerns. However, I do have an additional query regarding the performance of the post-train method. In Table 2 of the primary manuscript, the post-train method exhibits superior performance compared to EW and Auto-$\lambda$. Nevertheless, when employing the more robust ViT-Base model, an intriguing shift occurs, as shown in Table 2 of the global rebuttal: the post-train method's performance deteriorates, whereas the performance of the other two methods experiences a significant improvement. Could the authors kindly provide further elucidation regarding these observed outcomes? --- Reply to Comment 1.1.1: Title: Replying to Reviewer Comment: We sincerely appreciate your feedback and are pleased to see that our response has addressed many of your concerns. We are grateful for the opportunity to provide further clarification on the additional question regarding the performance of the post-train method. In $\text{Section 5.2}$, our experimental results on DomainNet show that the Post-train method has consistently exhibited a slight advantage over STL.
This trend can be attributed to several factors that contribute to the generalization ability of Post-train: - **Influence of Single-Task Fine-Tuning**: In datasets with a substantial volume of data, such as DomainNet, the final performance of a model is **notably influenced by the last stage of training**, which involves single-task fine-tuning. Consequently, when STL performs well, the Post-train method also benefits from single-task fine-tuning, resulting in better performance. - **Implicit Regularization from Pre-training**: The initial pre-training phase serves as a form of parameter initialization, offering implicit regularization that aids in optimizing the model. This initialization effect contributes to the enhanced generalization capability of the Post-train method compared to STL. However, we acknowledge the intriguing observation you have made when applying the more robust ViT-Base model, as highlighted in $\text{Table 2 of our global rebuttal pdf}$. In this case, there is a shift in performance dynamics, where the Post-train method's performance deteriorates while both the EW and Auto-$\lambda$ methods experience significant improvements. This phenomenon can be attributed to the interaction between auxiliary tasks and the target task, which has varying implications across different model architectures: **Impact of Auxiliary Tasks**: The introduction of auxiliary tasks brings about a dual effect on the model's performance. On one hand, the additional supervision signals contribute to a reduction in the risk of overfitting, enhancing generalization. On the other hand, the joint distribution shift between the auxiliary tasks and the target task can lead to negative transfer, impacting performance adversely. In different scenarios, both of these effects are present, but one effect might be more pronounced. **Impact of Model Capacity:** The influence of the dual effects of auxiliary tasks is dependent on the specific scenario and model capacity.
- For instance, with limited model capacity, as is the case with the ResNet101 architecture on DomainNet, the influence of task conflicts induced by auxiliary tasks becomes more pronounced. This results in the performance of the EW method being inferior to both STL and subsequently Post-train. - Conversely, when employing the Vision Transformer model, which boasts increased capacity, the risk of overfitting with limited data becomes more pronounced. This makes STL less effective and consequently leads to the EW method outperforming STL, causing the Post-train method to fall short of EW and Auto-$\lambda$. Across different scenarios, there exists an equilibrium point between EW and STL, driven by the dynamic interplay between the positive and negative effects of auxiliary tasks. **This equilibrium point represents the central aim of our ForkMerge algorithm: to uncover the optimal balance between these effects.** We believe that the consistent patterns observed across various scenarios underscore the utility and potential of the ForkMerge algorithm in identifying this equilibrium and leveraging the strengths of both STL and EW methods. We hope that this additional explanation provides a clearer understanding of the observed outcomes, and we remain open to any further inquiries you may have. Thank you once again for your valuable insights and the opportunity to enhance the clarity of our work.
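As a minimal illustration of the fork-then-merge dynamic discussed in this thread, the sketch below forks two branches from a shared initialization (an STL-like branch with $\lambda = 0$ and an EW-like branch with $\lambda = 1$) and then merges them by searching for the parameter interpolation that is best for the target task on validation data. The linear toy setup and all names here are ours for illustration only; the actual ForkMerge algorithm (Section 4 of the paper) additionally handles joint optimization of weightings, branch pruning, and data division.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a shared linear model, a target task, and a related auxiliary task.
n_train, n_val, d = 50, 200, 20
w_target = rng.normal(size=d)
w_aux = w_target + 0.3 * rng.normal(size=d)   # related but not identical

X = rng.normal(size=(n_train, d))
y_tgt = X @ w_target + 0.1 * rng.normal(size=n_train)
y_aux = X @ w_aux + 0.1 * rng.normal(size=n_train)
X_val = rng.normal(size=(n_val, d))
y_val = X_val @ w_target

def train(lmbda, steps=2000, lr=0.01):
    """Gradient descent on the joint loss L_tgt + lmbda * L_aux."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = (X.T @ (X @ w - y_tgt) + lmbda * X.T @ (X @ w - y_aux)) / n_train
        w -= lr * grad
    return w

# Fork: one branch per task weighting.
w_stl = train(0.0)   # target task only (STL-like branch)
w_ew = train(1.0)    # equal weighting (EW-like branch)

def val_loss(w):
    return np.mean((X_val @ w - y_val) ** 2)

# Merge: pick the parameter interpolation that is best for the target task.
grid = np.linspace(0.0, 1.0, 21)
best_alpha = min(grid, key=lambda a: val_loss(a * w_ew + (1 - a) * w_stl))
w_merged = best_alpha * w_ew + (1 - best_alpha) * w_stl

assert val_loss(w_merged) <= min(val_loss(w_stl), val_loss(w_ew)) + 1e-12
```

The closing assertion reflects the point above: because the interpolation grid contains both endpoints, merging can only match or improve on the better of the two branches under the validation objective, which is exactly the equilibrium between STL and EW that ForkMerge seeks.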
A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing
Accept (poster)
Summary: In non-linear compressed sensing, one would like to recover x from a series of observations y_i = f_i(a_i^T x); in generative compressed sensing, x is drawn from a generative model. The contribution of this paper is a framework for uniform recovery with general nonlinear measurements and Lipschitz generative models; in particular, this framework covers the combination of Lipschitz generative models and dithered 1-bit measurements, which was not previously known to give uniform recovery. Strengths: The result is fairly general, handling dithered 1-bit or otherwise discretized measurements, as well as noise in the measurements. The bound is pretty good, basically ideal except possibly for some terms inside the log factor. Weaknesses: The first specific application the authors present for this framework is: (1) uniform recovery of (2) Lipschitz generative models from (3) dithered 1-bit measurements. This is pretty specific, and if any one of the terms is relaxed (nonuniform, a ReLU generator, or non-dithered) then prior work covers it. That is to say: the result is nice and general, but prior work has pretty well covered the most interesting cases covered by this result. The writeup could be improved; it repeatedly defines things in terms of notions that are only defined pages later. For example, the main theorem (Theorem 1) relies on script(L), which isn't defined until the supplemental material, and Assumption 3 uses notation from Section 3. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Can you give a corollary for handling Gaussian noise? * What would you do for nonlinear sensing that's more nonlinear than discretization, e.g., sinusoidal or quadratic observations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive assessment of this paper and the useful comments and suggestions. We respond regarding the writing quality in the general response to all reviews, and respond to the other points as follows. (**If any one of the terms is relaxed (nonuniform, a ReLU generator, or non-dithered) then prior work covers it**) Thanks for the comment. We agree that some prior results have been established if any one of the three terms is changed, but we also note that any of the three terms is non-trivial rather than a straightforward extension: - As explained in the paper, it is often significantly more challenging to establish uniform guarantees compared to non-uniform ones. - Compared to ReLU networks studied in Qiu et al. (2020), our Lipschitz generative model is more general and requires different proof techniques. - Dithered 1-bit measurements and non-dithered 1-bit measurements are of very different characteristics, e.g., the former allows for norm recovery while the latter cannot; this can be seen by comparing our Corollaries 1 and 2. - Besides the 1-bit cases, we also mention that our Corollary 4, regarding dithered uniformly quantized measurements, is of practical interest and novel. Specifically, in the literature, there is no prior (uniform/non-uniform) guarantee for GCS under such a quantizer. Also, in a situation where we are allowed to sample several bits from each measurement, the uniform quantizer could retain more information and hence may be preferable. - We highlight the significance of a unified framework, which allows us to clearly see the ingredients (i.e., our Assumptions 2-4) that lead to sharp uniform recovery guarantees. We believe that this is also positioned as an important contribution. S. Qiu et al. "Robust one-bit recovery via ReLU generative networks: Near-optimal statistical rate and global landscape analysis." In ICML, 2020. 
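The contrast drawn above between dithered and non-dithered 1-bit measurements rests on a classical unbiasedness property: with uniform dither, the expectation of the quantized value recovers the input, so magnitude (and hence norm) information survives quantization, while plain sign measurements discard it entirely. A quick Monte Carlo sketch of this property (the scalar input and dither ranges below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
x = 0.37          # a scalar "measurement" to be quantized

# 1-bit with dithering: y = sign(x + tau), tau ~ Uniform(-lam, lam), |x| <= lam.
# Then E[y] = x / lam, so magnitude information survives the 1-bit quantizer.
lam = 1.0
tau = rng.uniform(-lam, lam, size=N)
one_bit = np.sign(x + tau)
print(lam * one_bit.mean())          # ~ 0.37

# Without dithering, sign(x) is the same for every positive x: scale is lost.
print(np.sign(x))

# Uniform quantizer with dithering: mid-rise Q(u) = delta*(floor(u/delta) + 1/2)
# with tau ~ Uniform(-delta/2, delta/2) gives E[Q(x + tau)] = x exactly.
delta = 0.5
tau = rng.uniform(-delta / 2, delta / 2, size=N)
quantized = delta * (np.floor((x + tau) / delta) + 0.5)
print(quantized.mean())              # ~ 0.37
```

Averaging over measurements (or, in the recovery guarantees, concentration over many dithered measurements) is therefore what enables the norm-recovery contrast between Corollaries 1 and 2.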
(**Can you give a corollary for handling Gaussian noise?**) Since the non-linearities can be random, Gaussian noise can be encompassed in our results for single index models (see Corollary 3). For the remaining quantization models, we may consider the case of adding Gaussian noise explicitly with the measurement model $\mathbf{y}=f(\mathbf{A}\mathbf{x}^*)+\mathbf{e}$, where $\mathbf{e}$ follows the $m$-dimensional standard Gaussian distribution. For this case, we will need to bound an additional term $\sup_{\mathbf{v}} \sum_{i=1}^m e_i \mathbf{a}_i^\top \mathbf{v}$, which is not a product process and is easier to handle than the current $\mathscr{R}_u$ (defined in the equation before Line 301). We will add a corollary for handling Gaussian noise in the revised version. (**What would you do for nonlinear sensing that's more nonlinear than discretization, e.g., sinusoidal or quadratic observations?**) Since the $\sin(\cdot)$ function is 1-Lipschitz continuous, the sinusoidal model where $f_i(x) = \sin(x)$ is encompassed by our single index model result (Corollary 3), where we can achieve accurate recovery without knowing $f_i$. For the quadratic model where $f_i(x)=x^2$ (which corresponds to the phase retrieval problem), we note that the important parameter $T$ (see Line 138) is zero, which means our results are not applicable to this model. However, we note that it is a common issue that classical single index models do not encompass the quadratic model, and phase retrieval models are typically studied separately from our sort of models. See, e.g., Page 5 and the Conclusion Section of Yang et al. (2017). We thank the reviewer for this interesting question, and we leave the uniform recovery guarantees for generative model based phase retrieval for future study. Z. Yang et al. "High-dimensional non-Gaussian single index models via thresholded score function estimation." In ICML, 2017. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
Would it be possible for you to give the corollary for Gaussian noise now? I'm curious whether it is reasonably tight. --- Reply to Comment 1.1.1: Title: A Corollary for handling Gaussian noise and its proof Comment: Thanks for your prompt response and the further question. Since it is a bit troublesome to display long equations in the OpenReview system, we provide the following anonymous link for the document regarding the corollary: **[EDIT: Link removed] Apologies, one author had missed the instruction not to include external links, so we have replicated the proof in a follow-up reply to this post instead** As we indicated in our initial response, we still have the sharp rate in the presence of Gaussian noise. The analysis is significantly simpler than that already done to handle other terms in our paper, so we view its addition as only a minor modification. The level of detail for this proof will be expanded slightly for the revised paper.
Summary: This paper introduces a unified framework for uniform signal recovery in nonlinear generative compressed sensing, in particular, 1-bit generative compressed sensing (GCS) and single-index models (SIM). The authors obtain uniform recovery guarantees for 1-bit GCS, 1-bit GCS with dithering, Lipschitz-continuous SIM, and uniformly quantized GCS with dithering. Experimental results are presented to corroborate the theoretical results. Strengths: 1. A unified framework for uniform signal recovery in nonlinear generative compressed sensing is proposed. 2. Uniform recovery guarantees are obtained for 1-bit GCS, 1-bit GCS with dithering, Lipschitz-continuous SIM, and uniformly quantized GCS with dithering. Weaknesses: 1. Both the main statements and the corresponding proofs build on the strong assumption that the target signal exactly belongs to the range of the generative model. Apparently, this is not the case for real-life compressed sensing, either theoretically or empirically. The key idea of generative compressed sensing is to recover an unknown target signal by leveraging the generative prior to capture its intrinsic structure. Nevertheless, it does not assume that the target signal is exactly generated by the assumed generative model. However, the goal of this paper is to prove the uniform recovery of a signal that is exactly generated by a known generative model, which is apparently not the case in real compressed sensing, or is a different problem. What if the target signal is not generated by the assumed generative model? 2. Given that this paper assumes that the target signal is exactly generated by a known generative model, how can the results be applied to practical generative models? For example, given two algorithms, one using a VAE prior and the other using a GAN prior, how can one compare the corresponding reconstruction results? 3. Are the main results applicable to diffusion models?
This paper assumes an L-Lipschitz continuous generative model and a radius-r ball in R^k. Recently, there have been several studies on diffusion models for generalized linear inverse problems, including 1-bit compressed sensing, for example: [R1] Meng, Xiangming, and Yoshiyuki Kabashima. "Quantized Compressed Sensing with Score-Based Generative Models." In ICLR, 2023. [R2] Chung, Hyungjin, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. "Diffusion posterior sampling for general noisy inverse problems." In ICLR, 2023. [R1] discusses quantized compressed sensing with score-based generative models and [R2] introduces a unified diffusion posterior sampling method for general noisy linear and nonlinear problems. Can the main statements in this paper apply to the above latest studies with diffusion models? Please illustrate the main differences and explain why. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Unreasonable assumptions and lack of discussion of some of the latest works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your useful comments and questions. Regarding the signal lying exactly in Range(G) and applying to practical generative models, please see the general response above. (**Are the main results applicable to diffusion models?**) In this paper, our focus is on generative models with a low-dimensional latent structure of size $k \ll n$, which captures many important generative models of interest. However, diffusion models are very different in that they map $\mathbb{R}^n \to \mathbb{R}^n$ and create significant mathematical complications, e.g., Range(G) may be prohibitively large unless we only consider a small subset (e.g., one with high probability) of the latent space. Being unable to handle such models when studying the sample complexity of high-dimensional inverse problems is not specifically a limitation of our work, but rather a **substantial open problem that is yet to be tackled by anyone** (to our knowledge). Accordingly, we strongly believe that addressing this open problem should not be expected in a work of our nature. To back up our claims, we note that even papers specifically about provable recovery with diffusion models do not give sample complexity guarantees (e.g., see https://arxiv.org/abs/2307.00619 and https://arxiv.org/abs/2302.01217). Nevertheless, we thank the reviewer for pointing out these two interesting papers on diffusion models. We will cite these papers in a revised Conclusion and Future Work Section and mention that providing similar uniform sample complexity guarantees for diffusion models is an interesting future direction. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Many thanks for the rebuttal. I do agree that such a theoretical analysis is interesting and might be non-trivial. My previous concern is that even a tiny relaxation of the strong assumption that the target signal exactly lies in Range(G) might lead to a fundamentally different result for uniform signal recovery. 
Moreover, the assumptions applied to the generative model in the proof might be difficult to verify for practical generative models. As a result, it is suggested to state the results carefully and state these limitations more clearly. In addition, from my understanding, the so-called "proof-of-concept experiments" do not "corroborate our theory" as stated. (1) The main results of the theory (Section 2.3) concern the relationship between the required sample complexity and a prescribed estimation error, while there is no such evaluation or verification in the experiments. I mean, I did not see how the presented results can support the main theoretical results of the paper. (2) In the theoretical analysis, the generalized Lasso is considered. However, in the experiment parts, CSGM is considered. Then, how can the experiments corroborate the theory? Why not use the generalized Lasso in the experiments? Another concern is that the authors stated that the main results apply to the case when the observation model f is unknown. This result is a bit surprising since it is extremely challenging to recover x when f is unknown. How can it share the same result as the case when f is known? I find it difficult to understand this from the current proof sketch in Section 3, nor did I see any experimental results for such a case in the appendix. Could the authors please explain this point a bit? --- Reply to Comment 1.1.1: Title: Follow-up response Comment: Thanks for the response and further questions, which we clarify below. We will also highlight the assumption of no representation error more clearly in the revised paper. **(Are the experiments running the generalized Lasso?)** Please note that when we mentioned running CSGM, we meant using it to (approximately) solve Eq. (2.1), which is precisely the generalized Lasso. Eq. (2.1) is intractable to solve exactly, and the CSGM approach is to approximate it using gradient descent with random restarts.
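This approximation can be sketched as follows. For illustration only, we use a toy *linear* generator G and noiseless linear measurements so that the sketch is self-contained; the paper's experiments instead use actual neural generators and the nonlinear objective of Eq. (2.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 100, 10, 30          # ambient dim, latent dim, number of measurements

W = rng.normal(size=(n, k))    # toy linear "generator" G(z) = W z
G = lambda z: W @ z

z_star = rng.normal(size=k)
x_star = G(z_star)             # target signal, exactly in Range(G)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_star                 # noiseless linear measurements for simplicity

def descend(z0, steps=1000):
    """Gradient descent on the generalized-Lasso objective ||A G(z) - y||^2."""
    M = A @ W                                     # composition is linear here
    lr = 1.0 / (2 * np.linalg.norm(M, 2) ** 2)    # 1/L step for this quadratic
    z = z0.copy()
    for _ in range(steps):
        z -= lr * 2 * M.T @ (M @ z - y)
    return z, np.sum((M @ z - y) ** 2)

# Random restarts: keep the run with the smallest objective value.
best_z, best_obj = min(
    (descend(rng.normal(size=k)) for _ in range(5)), key=lambda t: t[1]
)

rel_err = np.linalg.norm(G(best_z) - x_star) / np.linalg.norm(x_star)
print(best_obj, rel_err)       # both should be close to 0
```

In this toy convex setting every restart reaches the global minimizer; with a nonconvex neural generator, the random restarts are what make finding a good minimum likely, which is the standard practical approximation referred to above.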
Previously CSGM was devised specifically for linear models, but generalized Lasso contains the exact same objective function and constraint (up to scaling by $T$; see Remark 5) as the original one for linear models, so CSGM remains applicable despite the non-linearity. This idea of approximating the exact optimization by gradient methods is extremely standard in the literature on CS with generative priors, e.g., see Bora et al. (2017), Dhar et al. (2018), and Liu et al. (2020) in our paper’s reference list. Thus, up to very standard practical approximations, **we are running generalized Lasso**. We will re-word to make this clearer. **(Do the experiments corroborate the theory?)** We believe that the previous response partially addresses this, but provide further discussion as follows. We chose the word “corroborate” to avoid overly strong language like “verify” or “confirm”, but we would be happy to tone this down further and replace it by a more precise statement, e.g., “to demonstrate that a practical variant of the generalized Lasso can be effective in recovering multiple signals with a single measurement matrix.”. Confirming the scaling laws experimentally is a challenging task, and to our knowledge it has never been attempted in the literature on CS with generative priors. Our experiments are aligned with those performed in similar kinds of works that consider generative priors (e.g., those mentioned above), with the distinction that we take the *worst case* performance over *batches* of images to better align with our goal of uniform recovery. With the above-mentioned re-wording and the removal of any suggestion that we are verifying our theory, we believe that they are a useful addition to the paper. Having said this, we emphasize that **by far** our main contributions are our theoretical results, so we hope that they will accordingly be the main factor in the final decision. 
(For comparison, other works on single index models (SIMs) typically have no experiments at all, e.g., Plan & Vershynin (2016), Genzel (2016), and the most relevant work Genzel & Stollenwerk (2023).) **(How can unknown $f$ be possible?)** From an experimental point of view, the fact that we run CSGM to approximate Eq. (2.1) and get good results supports the fact that this is possible. From a mathematical point of view, existing works demonstrated that the SIM with unknown nonlinearity $f$ (as considered in Part C of our Section 2.3) can be “transformed” into a linear model with an “unconventional noise term”. Specifically, Plan & Vershynin (2016)'s Section 4 demonstrates that if $f$ satisfies the condition that $\mu := \mathbb{E}_{g \sim \mathcal{N}(0,1)}[f(g)g] \ne 0$, then $\mathbf{y} = f(\mathbf{A}\mathbf{x}^*)$ (where $f$ is applied element-wise and $\mathbf{A}$ has standard Gaussian entries) can be written as $\mathbf{y}=\mathbf{A}\mu\mathbf{x}^*+\mathbf{w}$, with $\mathbf{w}$ satisfying $\mathbb{E}[\mathbf{A}^\top\mathbf{w}]=\mathbf{0}$ thus acting as an unconventional noise vector. Although the generalized Lasso approach is most naturally suited to conventional noise such as Gaussian, it turns out to still work under this unconventional noise. We omitted the above discussion because it is already well-documented in previous works, but we would be happy to use the extra available page (if accepted) to include an overview similar to the above paragraph.
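The transformation described above can also be checked numerically: with standard Gaussian $\mathbf{A}$ and an unknown nonlinearity (here $f = \tanh$, chosen purely for illustration, along with the toy dimensions), the simple correlation estimate $\frac{1}{m}\mathbf{A}^\top\mathbf{y}$ concentrates around $\mu \mathbf{x}^*$ with $\mu = \mathbb{E}[f(g)g]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 20000

x_star = rng.normal(size=n)
x_star /= np.linalg.norm(x_star)          # unit-norm target signal

A = rng.normal(size=(m, n))               # standard Gaussian measurement matrix
f = np.tanh                               # "unknown" nonlinearity
y = f(A @ x_star)                         # y = f(A x*), applied element-wise

# mu = E[f(g) g] for g ~ N(0,1), estimated here by Monte Carlo.
g = rng.normal(size=1_000_000)
mu = np.mean(f(g) * g)                    # ~ 0.6 for tanh

# Plan & Vershynin: y = mu * A x* + w with E[A^T w] = 0, so the correlation
# estimate (1/m) A^T y concentrates around mu * x*.
x_hat = A.T @ y / m

cosine = x_hat @ x_star / np.linalg.norm(x_hat)
print(cosine)                             # close to 1: direction recovered
print(np.linalg.norm(x_hat) / mu)         # close to 1: norm recovered up to mu
```

The condition $\mu \neq 0$ is essential: for an even nonlinearity such as $f(x) = x^2$, $\mu = 0$ and this estimate carries no signal, which is consistent with phase retrieval falling outside this class of models.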
Summary: The paper discusses a unified framework for uniform signal recovery in nonlinear generative compressed sensing. The authors utilize the generalized Lasso and Lipschitz approximation to allow for a lower sample size of measurements. Strengths: In what follows, the strengths of the paper are given: 1) The paper presents a framework for deriving uniform recovery guarantees for nonlinear generative compressed sensing where the observation model is nonlinear and possibly discontinuous or unknown. 2) Utilizing the generalized Lasso and Lipschitz approximation allowed for a lower sample size of $\tilde{O} \left( \frac{k}{\varepsilon^2} \right)$, which is smaller than the previously best bound of $\tilde{O}\left( \frac{k}{\varepsilon^4}\right)$. 3) The idea of drawing back from using a concentration inequality (Lemma 5 in the appendix) and instead leveraging metric entropy to derive a tighter upper bound on $\mathcal{R}_u$ is quite interesting. 4) Finally, the paper is mainly theoretical, and the experimentation in the supplementary material serves to confirm the findings of the paper, which are extensively shown. Weaknesses: The paper is somewhat hard to follow: 1) Some notations are used before they are defined. 2) Some explanations around the mathematical parts are not properly stated, e.g., transitions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Please expand on the transitions between equations. 2) How did you ensure that the assumptions that were presented in the paper hold in your experiments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of this paper and the helpful comments. Regarding readability, please see the general response above. (**How did you ensure that the assumptions that were presented in the paper hold in your experiments?**) We would like to highlight that this work is primarily theoretical, and the experiments are basic proof-of-concept and not a main contribution. Assumptions 2-4 are assumed solely for the theory, and we have verified in Corollaries 1-4 (with the details being provided in Appendix E) that these assumptions are reasonable and are satisfied by various nonlinear models. As for Assumption 1, in particular for the assumption of no representation error (i.e., the target signal is contained in the range of the generative model) and its practical effect, please refer to our general responses to all reviewers.
Rebuttal 1: Rebuttal: **General responses to the three anonymous reviewers** We are very grateful to the reviewers for their helpful feedback and suggestions. Our responses to the main concerns shared by multiple reviewers are given as follows. Other responses are given to each reviewer separately. (**The assumption that the target signal exactly belongs to the range of generative models**) We agree that relaxing the assumption of no representation error (i.e., the target signal lies exactly in the range of the generative model) is of significant interest. However, for high-dimensional single index models (SIMs) with the generalized Lasso method, it may be infeasible to obtain comparable theoretical guarantees upon doing so. To the best of our knowledge, all prior works in this line (high-dimensional SIMs with generalized Lasso type approaches) assume no representation error, even under simpler classical priors where the target signal is assumed to be exactly contained in a low-complexity structured set or for weaker non-uniform recovery guarantees. A partial reference list is provided below. In particular, appropriately handling the representation error for SIMs (with the generalized Lasso approach) has been mentioned as an open problem in the Discussion Section of the seminal work by Plan & Vershynin (2016). We also highlight the recent work of Genzel & Stollenwerk (2023), which is the most relevant work to ours and also considers a setting where the signal lies *exactly* in a known structured set. As is evident from the fact that these recent papers have been published in the topmost venues in machine learning, and/or as evidenced by the substantial citation counts of these works (also shown on the list), we believe that the assumption of no representation error has been widely accepted in the active research area of high-dimensional SIMs. 
We sincerely hope that the final score/decision for our submission will be based on the main goal and contributions of this particular paper, rather than on the general limitations in a broad and popular line of works. References: Y. Plan & R. Vershynin. "The generalized lasso with non-linear observations." IEEE Trans. Inf. Theory, 2016. [198 citations] M. Genzel. "High-dimensional estimation of structured signals from non-linear observations with general convex loss functions." IEEE Trans. Inf. Theory, 2016. [47 citations] Z. Yang et al. "High-dimensional non-Gaussian single index models via thresholded score function estimation." In ICML, 2017. [47 citations] X. Wei et al. "On the statistical rate of nonlinear recovery in generative models with heavy-tailed data." In ICML, 2019. [22 citations] C. Thrampoulidis & A.S. Rawat. "The generalized lasso for sub-gaussian measurements with dithered quantization." IEEE Trans. Inf. Theory, 2020. [24 citations] Z. Liu & J. Scarlett. "The generalized lasso with nonlinear observations and generative priors." In NeurIPS, 2020. [12 citations] M. Genzel & A. Stollenwerk. "A unified approach to uniform signal recovery from nonlinear observations." Foundations of Computational Mathematics, 2023. [New paper] (**Given the assumption that the target signal is exactly generated by a known generative model, how can the results be applied to practical generative models?**) We would like to highlight that this work is primarily theoretical, and the experiments are basic proof-of-concept and not a main contribution. We observe from the experimental results presented in the supplementary material that we can obtain accurate reconstruction for MNIST and CelebA images using a relatively small number of samples. This indicates that the generalized Lasso approach itself is still effective with representation error, although the theory is an interesting open problem. 
(**The paper is somewhat hard to follow or the writeup could be improved**) In the revised paper, we will ensure that all the notations are defined before they are used and add more explanations between equations to make the paper easier to follow. This will include the specific suggestions by Reviewer 6eZp, and those we find from our own careful proofreading. We also welcome any further specific pointers from all three reviewers. While we will strive to be meticulous with these edits, we are confident that they will still amount to relatively minor changes.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Faster Margin Maximization Rates for Generic Optimization Methods
Accept (spotlight)
Summary: This paper studies efficient algorithms for margin maximization with respect to a general geometry. The problem of margin maximization is interesting because it has been shown that common optimization algorithms such as gradient descent prefer such solutions through a well-known phenomenon called "implicit bias." This paper proposes a novel analytical framework by re-casting the problem as a bilinear game. The main results show that techniques from online learning can be applied to show fast convergence rates for various optimization algorithms such as mirror descent and steepest descent. Strengths: This paper brings an interesting angle of studying the properties of optimization algorithms through the lens of online learning and games. In particular, Theorem 1 cleanly encapsulates the reduction from optimization to online learning. I feel that this result has a lot of potential for quickly deriving new optimization guarantees by reusing tools from online learning. I also like the presentation of this paper: it clearly describes the prior literature and makes a conscious effort to recover those results through the more general framework of this paper. This allows me to clearly understand the contribution of this paper and appreciate its versatility. Weaknesses: The notation is quite dense and can be difficult to parse. For example: 1. In Section 4.1, it is stated that the mirror descent potential is restricted to the $q$-norm for $q \in (1, 2]$. I think it would be better to bring this up sooner, in the Preliminaries section. 2. $D_E$ is not a standard notation; why not just call it the KL-divergence? 3. I found that the terms $p$-norm and $q$-norm are used almost interchangeably, which might be confusing to readers. In Section 4.3, does the mirror descent potential suffer from the same limitation as the results in Section 4.1? I have a hard time figuring this out. The theorem statements are very long and thus look unnecessarily intimidating. 
In particular, the notion of directional error is almost entirely ignored in Section 4. I personally would love to see more discussion on the directional error. The term "rate" is used imprecisely in the introduction. Some of the prior literature measures its rate in terms of margin maximization, while others measure it in terms of directional error. The authors should do more to clarify this. Regarding the remark on eq. (5)'s computational efficiency: while it is true that its total amount of computation is comparable to algorithms such as GD, I don't think taking the norm of the entire weight vector is practical in many cases, especially for deep learning, where parallel or distributed computation is desired. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would like to see a proof sketch and strongly recommend the authors add one to the revision. In particular, for Section 4.1, I still don't understand where the $q \in (1, 2]$ limitation comes into play. Does the same limitation apply to Section 4.3? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is purely a theoretical paper, so I don't think this paper has any immediate negative societal impact. I find the proofs to be generally easy to follow, but I did not follow all of them line-by-line. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
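As background for the $q$-norm potential discussed in this review, here is a minimal NumPy sketch (illustrative only, not the paper's code) of the mirror map $\nabla\Phi$ for $\Phi(w)=\frac{1}{2}\|w\|_q^2$, together with the standard duality fact that its inverse is the same map with the conjugate exponent $p=q/(q-1)$:

```python
import numpy as np

def grad_half_sq_qnorm(w, q):
    """Gradient of Phi(w) = 0.5 * ||w||_q^2, i.e. the mirror map."""
    nrm = np.linalg.norm(w, ord=q)
    if nrm == 0.0:
        return np.zeros_like(w)
    # d/dw_i (0.5 * ||w||_q^2) = sign(w_i) * |w_i|^(q-1) * ||w||_q^(2-q)
    return np.sign(w) * np.abs(w) ** (q - 1) * nrm ** (2 - q)

# Duality check: applying the map with the conjugate exponent inverts it.
q = 1.5
p = q / (q - 1)  # conjugate exponent; here p = 3
w = np.array([1.0, -2.0, 0.5])
theta = grad_half_sq_qnorm(w, q)        # primal -> dual
w_back = grad_half_sq_qnorm(theta, p)   # dual -> primal
```

For $q\in(1,2]$ the conjugate exponent satisfies $p\ge 2$, which is the pairing the review asks about.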
Rebuttal 1: Rebuttal: We greatly appreciate your positive and constructive feedback. We will revise the notation and enhance the overall presentation as per your advice. Your specific questions and concerns are addressed below. >D_E is not a standard notation, why not just call it the KL-divergence? Thank you for your recommendation. We agree that using standard notation can enhance the readability of our work. In the revised version, we will replace $D_E$ with the KL-divergence. > I would like to see some proof sketch and strongly recommend the author to add it to the revision. In particular, for Section 4.1, I still don't understand where the $q\in(1,2]$ limitation comes into play. We appreciate your interest in the derivation details, and we agree that a proof sketch would provide valuable insights. In the revised version, we will include a more comprehensive discussion in Section 4.1 to explain why the value of $q$ must fall within the range $(1,2]$. The key to understanding this is that when $q$ is in the range $(1,2]$, it ensures the $(q-1)$-strong convexity of $\ell_t(w)$, which in turn ensures a sufficiently small regret bound for the w-player. It is important to note that, as observed in Theorem 7 (Appendix C), the p-player has a significantly larger (potentially linear) regret bound. Therefore, it is crucial for the w-player's regret bound to be negative and small enough to offset terms in the final bound. >In Section 4.3, does the mirror descent potential suffer the same limitation as the results in Section 4.1? / Does the same limitation apply to Section 4.3? Indeed, the same limitation applies to the results presented in Section 4.3. As outlined in Theorem 6 (Section 4.3), we assume the norm $||\cdot||$ to be such that $||\cdot||^2$ is strongly convex. Although this is slightly more general, it essentially confines us to cases where $q$ falls within the range $(1,2]$. We will make sure to emphasize this point in the revised version for clarity. 
--- Rebuttal Comment 1.1: Comment: Thanks for your response. It would be nice if you could offer a preview of the additional discussion you intend to add in the revision. I do not feel comfortable moving my score before then. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you very much for your prompt feedback! The following is a preview of the discussion we would like to add: **Regarding the range of q**: In the original paper, we discussed the basic idea of why sublinear regret can be achieved (Lines 216-224): > This is an interesting and unusual design because the regularized greedy algorithm will clearly suffer a worst-case linear regret for the p-player. Luckily, we find that for our specific problem, the dominating term of the p-player's regret bound can be canceled by the w-player's regret bound, which is negative as the corresponding algorithm used is clairvoyant, i.e., it can see the current loss $\ell_t$ before making a decision at round $t$. This ensures that sublinear (and more generally fast) rates are possible. In the revised version, we intend to add a remark below this paragraph and be clear about the role of strong convexity. Specifically: To be more concrete, for the p-player, we show it guarantees a data-dependent regret bound: $$ Reg_T^p = O\left(\sum_{t=2}^T\frac{(t-1)(q-1)}{2}||w_t-w_{t-1}||_q^2 + \log n\log T\right), $$ which can be as bad as $O(T)$. On the other hand, for the w-player, the regret is bounded by $$ Reg_T^w = O\left(-\sum_{t=2}^T\frac{(t-1)(q-1)||w_t-w_{t-1}||_q^2}{2}\right), $$ which cancels the leading term in $Reg_T^p$ and leads to a small $C_T$. Note that FTL$^+$ (the w-player's online algorithm) can only achieve such a bound when $q\in(1,2]$, as in this case $\ell_t(w)$ is $(q-1)$-strongly convex. If $q>2$, the strong convexity no longer holds. In this case, FTL$^+$ only ensures a zero regret bound, which is insufficient to achieve a sublinear $C_T$. 
**Regarding Section 4.3**: Following your question, we will add the following discussion at Line 299: Finally, we note that Theorem 6 requires $||\cdot||^2$ to be strongly convex, which is satisfied for the $q$-norm with $q\in(1,2]$. **Regarding the directional error**: In the original paper, we removed the conclusion on the directional error due to lack of space. In the final version (with more space), we will add it back to the main theorems. One preview can be found in Theorem 7 of Appendix C, which contains the bounds on both the margin and the directional error. **Regarding the computational complexity**: Following your suggestion, we will add the following after Line 207: we note that the p-MD algorithm of Sun et al. (2022) does not need to compute the norm of the decision at each round, which can be more efficient in real-world applications where parallel or distributed computation is desired.
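The cancellation argument previewed in this rebuttal can be condensed into one display (a sketch only: the hidden constants in the two $O(\cdot)$ bounds are assumed to line up so that the negative w-player term absorbs the positive p-player term, which is what the full proof is said to establish):

```latex
\[
Reg_T^p + Reg_T^w
\;\le\; \underbrace{c_1\sum_{t=2}^{T}\frac{(t-1)(q-1)}{2}\lVert w_t-w_{t-1}\rVert_q^2}_{\text{p-player, potentially linear in }T}
\;-\; \underbrace{c_2\sum_{t=2}^{T}\frac{(t-1)(q-1)}{2}\lVert w_t-w_{t-1}\rVert_q^2}_{\text{w-player (negative regret)}}
\;+\; O(\log n\log T)
\;=\; O(\log n\log T), \qquad c_1 \le c_2.
\]
```

Dividing the total regret by $T$ then gives an average rate of order $\frac{\log n\log T}{T}$, consistent with the rate quoted by another reviewer.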
Summary: This work gives a unified perspective on maximal-margin problems realized by a wide range of optimization algorithms. It mainly covers three cases: (i) steepest descent under a general norm, (ii) mirror descent, and (iii) momentum-based acceleration. The authors collectively refer to them as generic optimization problems. The essential point of the theory is that they translate these generic optimization problems into a regularized bilinear game with online learning. This enables us to derive relatively tight margin maximization rates and to characterize the implicit bias of each algorithm. Strengths: This work is based on a concrete and rigorous theoretical analysis of a wide range of optimization problems. One interesting point is that all of them can be mapped into the regularized bilinear game formulation. It gives us a unified perspective on various optimization problems. In addition, this unified formulation enables us to obtain tighter convergence rates compared to previous work. Weaknesses: While this work covers various optimization problems and an interesting formulation with the bilinear game, it might be possible to view this as a collection of results that are not particularly novel. The convergence rates of the problems (for example, (i) steepest descent under a general norm, (ii) mirror descent, and (iii) momentum-based acceleration) have been studied before, although the known rates seem looser than those obtained in the current work. The game formulation has also been provided by some previous work ([Wang et al 2021, 2022b]) in a limited situation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Difference from previous work** Since this work covers various topics in optimization, it seems hard for beginners in this research area to judge at what points it is significantly novel compared to the previous work. In particular, the difference from Wang et al 2022b seems to need more clarification. 
Did this previous work not address the max-margin rate of all of (i) steepest descent under a general norm, (ii) mirror descent, and (iii) momentum-based acceleration? **Extension to other generic loss functions** It is quite curious whether the idea of using the bilinear game can be extended to loss functions other than the exponential loss. Is there any work or evidence that the current proof approach could potentially be applied to other loss functions? Otherwise, is it quite specific to the exponential function? **Tightness compared to numerical experiments** Although I agree that the theoretical evaluation of the bounds itself contributes to enriching our understanding of the problems, it remains unclear how tight the obtained convergence rates are. I mean, there is a possibility that the obtained bounds are much looser than what is observed in real optimization experiments. The authors show no empirical confirmation, and this leaves the superiority of the fast convergence rates obtained in this work somewhat unclear. It is not a major flaw, though. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As I mentioned in Questions, one limitation is that there is no comparison with numerical experiments, so the tightness (or "true" implicit bias) seems unclear. The other is the restriction to the exponential loss, as mentioned in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Difference from Wang et al 2022b seems to need more clarification. Did this previous work not address the max-margin rate of all of (i) steepest descent under a general norm, (ii) mirror descent, and (iii) momentum-based acceleration problems? Thank you for the constructive suggestion; we will add a more detailed discussion in the revised version. Compared with Wang et al (2022b), we note that: 1) Wang et al. (2022b) draws the connection between Nesterov-accelerated GD for ERM and solving the bilinear game through an online dynamic. However, it was unclear whether this kind of analysis suits other gradient-descent-based methods, and generic optimization methods such as mirror descent/steepest descent were not addressed at all. We observe in this work that the non-linearity of the mirror map in generic optimization methods, such as mirror descent and steepest descent, makes the analysis particularly challenging. In this paper, we reveal that the game framework can in fact encompass implicit bias analysis for a range of generic optimization methods, and offer a more streamlined and unified analysis. 2) Wang et al. (2022b) also proposed an accelerated p-norm perceptron problem. However, they only demonstrated that the algorithm could achieve a non-negative margin, leaving open questions regarding whether the margin can be maximized (i.e., converge to $\gamma$), and if so, what the margin maximization rate would be; 3) they only presented the online dynamic, without its equivalent optimization form under ERM. > It is quite curious whether the idea of using the bilinear game can be extended to other loss functions except for the exponential loss. Is there any work or evidence that the current proof approach could potentially be applied to other loss functions? Otherwise, is it quite specific to the exponential function? Thank you for this intriguing question, and we will add a more detailed discussion in the revised version. 
Please refer to the first question in the general response for our answer to this question. > Tightness compared to numerical experiments We appreciate your valuable suggestions. While our paper's primary focus lies in the theoretical aspects of implicit bias analysis, we concur that supplementing these theories with numerical experiments would enhance our work. Accordingly, we will introduce experiments in the revised version to substantiate the efficacy of our proposed methods. --- Rebuttal Comment 1.1: Comment: Thank you for your kind response and clarification. >we will add a more detailed discussion in the revised version. Compared with Wang et al (2022b), ... >Accordingly, we will introduce experiments in the revised version to substantiate the efficacy of our proposed methods. I am looking forward to seeing them in the final version. I guess that even very brief experiments would be informative for subsequent works. Since I understand that the lack of such empirical observation does not undermine the main contribution, I feel comfortable keeping my score on the accept side.
Summary: The paper studies the implicit bias of generic optimization methods, which plays a key role in understanding their generalization capabilities in settings with multiple solutions. The authors propose a new game framework to derive margin and directional error rates, which consists of transforming the optimization method into an equivalent instance of a 2-player margin-maximization game. Afterwards, margin and directional error rates can be derived directly from the players’ average online regret. They show equivalent transformations (under exp loss) from weighted MD with a squared p-norm potential and SD under a general norm to instances of the proposed 2-player game, yielding convergence rates from each transformation’s induced regret. More aggressive learning rates are also studied and shown to be able to improve convergence rates. The paper shows, through their game framework, that even faster rates can be achieved with Nesterov MD and a form of momentum SD. Strengths: The paper is clear and well-written, adopting consistent, well-defined notation / objects and presenting assumptions, definitions, and results in a local and organized fashion. Its novelty and contribution seem significant: The proposed game framework seems to be a novel approach to study implicit biases (margin / directional error rates). By reducing rate analyses to the task of finding equivalent transformations to the 2-player game, it might facilitate the study of new method’s implicit biases and hence prove useful for future research. It provides a fresh perspective and, as shown in the paper, accommodates different forms of MD and SD, suggesting that it might be able to capture different first-order methods as well. The transformations, although technical and non-trivial, provide valuable insights on the behavior of MD and SD. 
The use of clairvoyance for the w-player to show the form equivalence is also interesting and might be useful for new proof techniques, but I cannot assess its novelty since I am not very familiar with the related work. The improved margin rates and new accelerated methods given in Section 4.3 are also impactful and suggest that momentum might be crucial to speed up (general) margin maximization – from what I understand the community only had evidence of this for $p=2$ (Ji et al'21). From my understanding, the paper shows the first $1/T^2$ margin rate for $p \in (1,2)$, which seems to be a significant contribution. Weaknesses: The last page or two seem to have been written a bit in a hurry, and little space has been allocated to Section 4.3 compared to its significance. However, since these can be easily fixed in a revision I will not account for that when rating the paper. Although I believe the paper has enough contributions, it would be interesting to have some small (even if synthetic) experiments with margin-by-time curves for different methods and learning rate regimes. Minor fixes: L51: log t -> log T? L{276, 285, 291, 293, 295}: Algorithm 4.3 -> Algorithm 4 L516 proof -> prove Should state in Section 4.3 that proof of Theorem 6 is in Appendix E Eq 6 and Eq 7: period instead of comma after equation Eq 8: missing period after equation Equation at end of page 6: colon instead of period before eq, period after eq. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is the potential in the left box of Algorithm 4 the general norm squared? This seems to be the case but I think it is not specified in Section 4.3. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are properly acknowledged, e.g. applicability of the framework only to exp loss and showing method-game equivalence might take effort and be non-trivial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate your constructive review and supportive comments on our work! We will refine the presentation and rectify the typos in line with your suggestions. For clarity, we will specify in the revised version that the potential in the left box of Algorithm 4 refers to the general norm squared. --- Rebuttal Comment 1.1: Comment: Thanks for the response!
Summary: This paper studies the implicit bias of generic optimization methods such as mirror descent and steepest descent. By transforming the generic optimization algorithm into an online learning dynamic, this paper shows accelerated rates and offers a new perspective. Strengths: 1. This paper is well-written. 2. The theoretical analysis is solid and explicitly explained Weaknesses: The major weakness is that this paper only focuses on the exponential loss. In addition, choosing the hyperparameters is sometimes difficult. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors provide some simple experiments to verify the correctness of the theoretical results? For example, using a synthetic dataset with its max-margin solution known. 2. For the p-player, will solving the subproblem lead to a larger complexity? 3. What is the meaning of $D_E$ in Algorithm 3? Typos: Algorithm 4.3 should be Algorithm 4. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: OK Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the review and positive comments! > Can the authors provide some simple experiments to verify the correctness of the theoretical results? For example, using a synthetic dataset with its max-margin solution known. Thank you for the constructive suggestion; as we discuss in the second point of the general response, we will add numerical experiments in the revised version. > For the p-player, will solving the subproblem lead to a larger complexity? Thank you for the question. We note that the online/game framework is only introduced for theoretical analysis, and we do not need to actually run the corresponding online algorithms. Therefore, the p-player will not introduce extra computational complexity. We will make this clearer in the revised version. > What is the meaning of D_E in Algorithm 3? Thank you for the question. $D_E$ stands for the Bregman divergence with respect to the negative-entropy regularizer. We will make this clear in the notation section.
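To make the $D_E$ notation concrete, here is a small NumPy check (illustrative only, not the paper's code) that the Bregman divergence induced by the negative-entropy regularizer coincides with the KL divergence on the probability simplex:

```python
import numpy as np

def neg_entropy(p):
    """Negative-entropy regularizer Phi(p) = sum_i p_i log p_i."""
    return np.sum(p * np.log(p))

def bregman_neg_entropy(p, q):
    """D_E(p, q) = Phi(p) - Phi(q) - <grad Phi(q), p - q>."""
    grad_q = np.log(q) + 1.0  # gradient of the negative entropy at q
    return neg_entropy(p) - neg_entropy(q) - np.dot(grad_q, p - q)

def kl_divergence(p, q):
    return np.sum(p * np.log(p / q))

# Two points on the probability simplex.
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
```

Off the simplex, the Bregman form differs from KL by the term $\sum_i q_i - \sum_i p_i$ (the generalized KL divergence), which vanishes when both arguments sum to one.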
Rebuttal 1: Rebuttal: We are deeply grateful to all reviewers for their positive feedback and valuable suggestions. We commit to implementing your advice to refine our paper, and some of the frequently asked questions are addressed below. >Do you believe a similar approach could be used beyond the exponential loss? We appreciate this insightful question and will include a more comprehensive discussion on this topic in our revised version. Our primary aim in this paper is to present a unified analysis framework for understanding the implicit bias phenomenon and to provide faster convergence rates for a variety of generic optimization methods. We posit that the analysis of exponential loss serves as a first step, and the application of the game/online learning framework bears the potential to be broadened to incorporate an analysis of a more diverse array of loss functions. The current choice of exponential loss is particularly apt for the game/online dynamic analysis as it relates closely to the classical Hedge algorithm in online learning, which also employs the exponential function to measure experts' losses. To broaden our scope to more general functions, we could contemplate replacing the exponential loss in the Hedge algorithm with other losses (e.g., using the “Polynomially Weighted Average Forecaster”, given in Corollary 2.1 of Cesa-Bianchi & Lugosi (2006)), and attempt to establish a link between this online dynamic and optimization methods for more general loss functions. Our preliminary investigations suggest that this avenue is promising but highly non-trivial, and warrants further exploration in future work. > Tightness measured through numerical experiments in addition to theory. We value your constructive suggestions. While our paper primarily focuses on the theoretical aspects of implicit bias analysis, we concur that complementing our theoretical findings with numerical experiments would provide a more robust validation of our work. 
Consequently, we intend to incorporate experiments in the revised version to demonstrate the effectiveness of the proposed methods. Cesa-Bianchi, N., & Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.
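As a reference point for the connection to Hedge mentioned in this general response, here is a minimal exponential-weights update (the textbook algorithm from Cesa-Bianchi & Lugosi, not the paper's method):

```python
import numpy as np

def hedge_update(weights, losses, eta):
    """One Hedge step: reweight by exp(-eta * loss), then renormalize."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

# Two experts with a uniform prior; expert 0 suffers loss 1, expert 1 loss 0,
# so mass shifts toward expert 1.
weights = hedge_update(np.array([0.5, 0.5]), np.array([1.0, 0.0]), eta=1.0)
```

Replacing the exponential reweighting here with a polynomial one gives the "Polynomially Weighted Average Forecaster" mentioned above as a candidate route beyond the exponential loss.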
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work introduces a novel method to derive margin maximization and directional error rates for generic optimization methods. The method consists in finding a reformulation of the regular optimization as a minmax bilinear game. Rates can then be derived using online learning techniques. The authors demonstrate (Thm. 1) how solving a min-max regularized bilinear objective using online learning is maximizing the margin. They also derive (Thm. 2, 3, 5) the minmax formulation of mirror descent and steepest descent. Using their method, several new margin-maximization and directional error rates for mirror descent (average iterate) and steepest descent are derived. Moreover, rates are also derived for accelerated methods (Thm. 4 and 6). Strengths: Originality & Significance: While not being entirely familiar with all the prior works, I find the main idea presented in this paper novel. The proposed rates improve upon prior works, and the approach seems to be able to encompass a wide range of optimization techniques, making it a significant contribution in my opinion. Clarity: I found the paper easy to read despite its density. Some small typos: in Figure 1, this should be max min and not max max. Weaknesses: The authors ask the question whether generic optimization methods can achieve faster rates than GD. It seems the proposed rate for steepest descent matches the GD rate, but the question is still open for mirror descent. It is unclear how to conclude for mirror descent as you are deriving an average-iterate rate that is difficult to compare to existing last-iterate rates. Usually average-iterate rates are better than last-iterate rates. It is unclear how good those rates can become. Is there a lower bound you could comment on? This work provides a unifying method to obtain margin maximization rates for linear models with exponential loss. It is unclear how the approach could be extended to different losses. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What would it take to remove the $\log(T)$ term in the $\mathcal{O}(\frac{\log n \log T}{(q-1) T})$ for mirror descent? For mirror descent, what is preventing the analysis to be extended to the last iterate? What would a comparison between your mirror descent rate and existing GD rates look like in the average iterate regime? Do you believe a similar approach could be used beyond the exponential loss? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations have been discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are profoundly grateful for your positive evaluation of our work and your detailed, constructive feedback. We address some of your specific inquiries below: >What would be required to eliminate the term $O(\log T)$ in the $O(\frac{\log n\log T}{(q-1)T})$ for mirror descent? We appreciate this insightful question. Following the proof structure of Corollary 3, if we fix the total number of iterations $T$ in advance, we can set $\alpha_t$ to $T$, thereby eliminating the $\log T$ term. However, as it does not make sense to fix $T$ in an optimization application of online learning, we set $\alpha_t$ to either $1$ or $t$, both of which introduce a $\sum_{t=1}^T \frac{1}{t}$ term, resulting in the $\log T$ factor. We will elaborate on this point in our revised version. >Regarding mirror descent, what hinders the analysis from being extended to the last iterate? This is an excellent question. At present, the primary challenge lies in the non-linearity of the mirror map, namely $\nabla \Phi(\sum_{t=1}^Tw_t)\not=\sum_{t=1}^T \nabla\Phi(w_t)$. While the equality holds for $q=2$, it does not when $q\in(1,2)$, posing difficulties for various aspects of the proof such as obtaining a lower bound on the weighted sum of $w_t$ and proving algorithm equivalence. We discovered that by allowing the p-player to implement a specialized algorithm (namely, regularized greedy as defined at the bottom of page 6), we can derive a weighted average version of MD via algorithm equivalence analysis, and the issues mentioned earlier can also be addressed in a nuanced manner. We will provide a more thorough discussion on this issue. > How would a comparison between your mirror descent rate and the existing GD rates appear in the average iterate regime? Thank you for the question. When $q=2$, our average MD corresponds to an average version of GD and exhibits an $O(1/T)$ rate, which is similar to the optimal rate of last-iterate GD. 
Conversely, when $q=2$, our last-iterate steepest descent algorithm also simplifies to last-iterate GD, thereby demonstrating that our framework is versatile and can be used to analyze last-iterate GD. > Do you believe a similar approach could be used beyond the exponential loss? Thank you for raising this point. Please refer to the first question of the general response for our answer to this inquiry. --- Rebuttal Comment 1.1: Comment: Thank you for those clarifications!
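As a numerical aside on the $\log T$ factor discussed in this thread: the rebuttal attributes it to the harmonic sum $\sum_{t=1}^T \frac{1}{t}$ that arises when $\alpha_t$ is set to $1$ or $t$. A minimal sketch (our illustrative code, not the authors') confirming that this sum grows like $\log T$:

```python
import math

# The harmonic sum H_T = sum_{t=1}^T 1/t grows like log(T); the gap
# H_T - log(T) converges to the Euler-Mascheroni constant (~0.5772).
# This is the source of the extra log(T) factor once alpha_t is 1 or t.
def harmonic(T):
    return sum(1.0 / t for t in range(1, T + 1))

for T in (10, 100, 1000, 10000):
    h = harmonic(T)
    print(f"T={T:>5}  H_T={h:.4f}  H_T - log(T)={h - math.log(T):.4f}")
```

The printed gap stabilizes quickly, illustrating that the $\log T$ growth of the sum itself is unavoidable under either choice of $\alpha_t$, as the rebuttal explains.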
Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity
Accept (poster)
Summary: There is a continuing effort to understand the complex training dynamics and loss landscape of neural networks, and one of the most interesting discoveries is Linear Mode Connectivity (LMC). LMC is the phenomenon that when two different solutions are linearly interpolated in parameter space, the training and test losses along the path remain nearly as low as at the solutions themselves. In this paper, the authors propose Layerwise Linear Feature Connectivity (LLFC), a stronger form of linear connectivity than LMC. LLFC means that the feature maps of all layers of two differently trained solutions are also linearly connected. Through various experiments, they show that LLFC is satisfied when the spawning and permutation methods, which are common methods for producing LMC, are used. In addition, they advance the understanding of LMC with a theoretical explanation of the underlying factors that make LLFC appear naturally. Strengths: - The paper introduces LLFC, an interesting phenomenon that extends LMC, linear connectivity in parameter space, to linear connectivity in feature space. - The empirical results are comprehensive and contain the content needed to develop the discussion. - The theoretical analysis provides a convincing explanation for the emergence of LLFC. Weaknesses: - The authors only ran their experiments on MNIST and CIFAR10, which are relatively easy datasets. It will be important to verify its validity on larger datasets such as ImageNet. Git Re-Basin (Ainsworth et al., 2023) also showed a relatively large loss barrier on ImageNet, so validation on a wider range of datasets is required. ----- (Ainsworth et al., 2023) [Git Re-Basin: Merging Models modulo Permutation Symmetries](https://openreview.net/forum?id=CQsmMYmlP5T) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I'm not sure how low the value of the sparsity measure is in Figure 4. Is a value of ~0.3 considered sparse?
- In the experiments, the values measured with random variables, etc. are used as baselines, but it would be more accurate to compare them to the values for a model that is not linearly connected. The baseline values currently presented seem to exaggerate the results because the comparison is between very different quantities. - Just a question: do you think the ensemble performance would be improved when ensembling linearly interpolated features with the modes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I see no potential negative societal impact from this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Ask for additional experiments on larger datasets such as ImageNet. “The authors only ran their experiments on MNIST and CIFAR10, which are *relatively easy* datasets. It will be important to verify its validity on larger datasets such as ImageNet.”** **A1**: Thank you for the great suggestions. To begin with, we want to clarify that the purpose of our main experiments is to verify that LLFC co-occurs with LMC. Because the permutation methods cannot achieve zero-loss-barrier LMC on the ImageNet dataset [1], we cannot verify that LLFC co-occurs with LMC in this particular instance. However, we take note that Frankle et al. [8] observed LMC on ResNet-50 trained on the ImageNet dataset using the spawning method. Therefore, we follow your suggestion and conduct additional experiments on larger datasets using the spawning method. Given the time constraints of the rebuttal period and the limitations of computational resources, we opted for the Tiny-ImageNet dataset [Cite 1], which is smaller than ImageNet but still a hard challenge. Our experiments adhere to the same training configurations as those outlined in Frankle et al. [8]. We apply the spawning method to obtain the two linearly connected modes, $\boldsymbol{\theta_A}$ and $\boldsymbol{\theta}_B$. Subsequently, consistent with the experimental settings of the main paper, we evaluate both ${\rm cosine}\_{\alpha}(\boldsymbol{x}_i)$ and ${\rm cosine}\_{A, B}(\boldsymbol{x}_i)$ for each data point $\boldsymbol{x}_i$ in the test set $\mathcal{D}$. In Figure 1 (global response), the values of $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{\alpha}(\boldsymbol{x}_i)]$ consistently approximate zero, in contrast to $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{A,B}(\boldsymbol{x}_i)]$, across different layers and different values of $\alpha$. This further strengthens our argument, confirming the co-occurrence of LLFC and LMC. For more experimental details, please kindly refer to our global response.
[cite 1] Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N, 7(7):3, 2015 **Q2: Ask for a baseline of sparsity for comparison. “I'm not sure how low the value of the sparsity measure is in Figure 4.”** **A2**: Thank you for the great question. We follow your suggestion and conduct new experiments to add baselines of sparsity for comparison. We choose the pre-activations of randomly initialized networks as our baseline. We measure the sparsity of the pre-activations of both well-trained networks and randomly initialized networks, using $S(\boldsymbol{x}) = \frac{\|\boldsymbol{x}\|_1}{n\|\boldsymbol{x}\|\_{\infty}}(\boldsymbol{x} \in \mathbb{R}^n)$, denoted as $S(\tilde{\boldsymbol{h}}\_{i, \text{end}})$ and $S(\tilde{\boldsymbol{h}}\_{i, \text{init}})$ respectively. In Figure 3 (global response), the values of $S(\tilde{\boldsymbol{h}}\_{i, \text{end}})$ are relatively small compared to $S(\tilde{\boldsymbol{h}}\_{i, \text{init}})$, thus providing further support for our sparsity claim. For more experimental details, please kindly refer to our global response. **Q3: Ask for more baselines for verifying the weak additivity condition. “In the experiments, the values measured with random variables, etc. are used as baselines, but it would be more accurate to compare them to the values for a model that is not linearly connected.”** **A3**: Thank you for your great suggestion. We follow your suggestion and conduct additional experiments to compare with models that are not linearly connected. Specifically, we compare $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B})$ with $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ and $\text{Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$.
Here, $\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B}$ denote the pre-activations of the two linearly connected modes $\boldsymbol{\theta}\_A$ and $\boldsymbol{\theta}\_B$, while $\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D}$ denote the pre-activations of two independently trained modes $\boldsymbol{\theta}_C$ and $\boldsymbol{\theta}_D$. Meanwhile, $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ are still independent $d\_{\ell}$-dimensional random vectors sampled from $\mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$. In Figure 3 (global response), the values of $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B})$ are negligible in comparison to $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ and $\text{Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$, which further validates the weak additivity condition for linearly connected models. For more experimental details, please kindly refer to our global response. **Q4: Question on ensemble performance. “Just a question, do you think the ensemble performance would be improved when ensembling linearly interpolated features with the modes?”** **A4**: Thank you for the interesting question. While we do not have a definitive answer to this question, we do wish to highlight several studies that bear relevance. For example, as explored in Section 5.4 of Ainsworth et al. [1], an interesting observation arises when training two models on distinct splits of the dataset. In this context, the performance of a linearly interpolated model fails to match that of an ensemble of two models with twice the number of effective weights. This intriguing issue holds potential for further investigation, and we may gain insights to enhance the design of more effective ensemble methods. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and revisions to the result as I suggested.
I believe that LLFC is an interesting phenomenon and will be helpful to many researchers.
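For concreteness, the sparsity measure $S(\boldsymbol{x}) = \frac{\|\boldsymbol{x}\|_1}{n\|\boldsymbol{x}\|_{\infty}}$ used in A2 above takes only a few lines to compute; a minimal sketch (our illustrative code, not the authors' implementation):

```python
import numpy as np

# Sparsity measure S(x) = ||x||_1 / (n * ||x||_inf) from the rebuttal:
# S equals 1 for a constant-magnitude vector and 1/n for a 1-sparse
# vector, so smaller values indicate sparser pre-activations.
def sparsity(x):
    x = np.abs(np.asarray(x, dtype=float))
    return x.sum() / (x.size * x.max())

dense = np.ones(100)       # every entry has the same magnitude
sparse = np.zeros(100)
sparse[0] = 5.0            # a single large entry
print(sparsity(dense))     # -> 1.0
print(sparsity(sparse))    # -> 0.01
```

Under this measure, the ~0.3 values the reviewer asks about sit well below the 1.0 of an equal-magnitude vector, which is the kind of comparison the randomly initialized baseline in A2 makes explicit.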
Summary: The work identifies layerwise linear feature connectivity (LLFC) and proves LLFC is sufficient for linear mode connectivity (LMC) between neural networks. Experimental evidence using two methods for finding LMC networks (spawning and permutation) finds that LMC and LLFC co-occur in a variety of models and datasets. Two conditions (weak additivity of ReLU and commutativity) are identified as sufficient for LLFC, and experimental evidence is presented for these conditions occurring in LMC networks. Strengths: The paper unifies two lines of research into LMC (spawning versus permutation methods) under a common mathematical framework. The notion of commutativity is an interesting derivation and not as obvious as it seems at first glance. The paper is well written and easy to follow, with clear definitions. If established comprehensively, LLFC would be a significantly stronger condition than LMC, and would have far-reaching implications. Weaknesses: Main issues: - Current alignment algorithms for permuting LMC networks already directly optimize for LLFC (section 5.3), so the finding that LLFC implies LMC (Lemma 1) is formalizing a well-established phenomenon which is somewhat obvious. - As a result, LLFC implying LMC is somehow less interesting than LMC implying LLFC, but the latter direction is not explored. Only co-occurrence is observed experimentally, making the direction of causation unclear. In particular, if LMC is found to imply LLFC, the reasons are likely to be extremely informative. - For spawning LMC networks, it is unclear how the early training epochs contribute to enabling LMC. The experiments do not consider the evolution of LMC networks through training time. Furthermore, the connection between figure 6 and the equation in lines 285-286 is unclear (see questions below). Additionally, the experimental evidence lacks baselines for comparison. In general, a fair comparison should include interpolated non-LMC networks.
- Figure 2 and 3: no cosine similarity for interpolated networks that are not linearly connected. Since interpolating between any vectors (including random ones) increases their cosine similarity considerably, it is not clear that the increase in similarity is due to LLFC and not averaging. - Figure 4: no sparsity baseline for random networks. Currently it is unclear what is considered "small" in line 232. - Figure 5: no commutativity distance for non-LMC networks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Section 5.2: how does similarity of $U_A$ and $U_B$ imply commutativity when $W = U \Sigma V$ also depends on $V$, which could differ significantly between two networks? The connection between figure 6 and the equation in lines 285-286 is unclear. - How much would solving the quadratic assignment problem improve over existing algorithms? Given that activation similarity is closely correlated with weight similarity, it is not obvious whether minimizing the QAP will lead to significant improvements in permutation alignment versus aligning weights and/or activations, which is equivalent to minimizing one or both sides of equation 9 separately. - Weak additivity seems to rely heavily on the ReLU function. Does LLFC occur for other activation functions? - Do networks exist where LMC holds but not LLFC? If so, how can one find such networks and how common are they? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Overall the findings could be explored in greater depth and better related to observable phenomena. While LLFC is shown to imply LMC, the reverse is not shown or refuted.
For spawned LMC networks, it is not clear whether LLFC is a byproduct of averaging, how LLFC relates to the onset of LMC at an early point in training, or how LLFC can be increased/decreased. For permuted LMC networks, current alignment algorithms (e.g. Ainsworth et al. 2022, Singh & Jaggi 2020) are already explicitly optimizing for LLFC (via weight or activation alignment), and thus the findings have limited explanatory power. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
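As an aside on the cosine-similarity concern raised in the weaknesses above: the LLFC-style diagnostic compares the features of an interpolated network against the interpolation of the two networks' features. A toy sketch of that comparison (our code, not the paper's; the perturbed sibling network is a hypothetical stand-in for a spawned mode, which in the paper comes from actual training):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def features(weights, x):
    # Forward pass of a small ReLU MLP; returns the last hidden features.
    h = x
    for W in weights:
        h = relu(W @ h)
    return h

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

rng = np.random.default_rng(0)
x = rng.normal(size=8)
theta_a = [rng.normal(size=(16, 8)), rng.normal(size=(16, 16))]
# A nearby "mode": a small perturbation of theta_a. Real linearly
# connected modes come from spawning/permutation, not perturbation.
theta_b = [W + 0.01 * rng.normal(size=W.shape) for W in theta_a]

alpha = 0.5
interp = [alpha * Wa + (1 - alpha) * Wb for Wa, Wb in zip(theta_a, theta_b)]
f_interp = features(interp, x)
f_avg = alpha * features(theta_a, x) + (1 - alpha) * features(theta_b, x)
print(cosine(f_interp, f_avg))  # close to 1 for nearby modes
```

The reviewer's point is that such a cosine is trivially high for nearby or averaged vectors, which is exactly why baselines from non-linearly-connected networks matter.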
Rebuttal 1: Rebuttal: **Q1: “Current alignment algorithms for permuting LMC networks already directly optimize for LLFC (section 5.3), so the finding that LLFC implies LMC (Lemma 1) is formalizing a well-established phenomenon which is somewhat obvious.”** **A1**: We'd like to emphasize that the interpretation that current alignment algorithms optimize for LLFC (Sec 5.3) is a new finding and one of the key contributions of this paper. This provides deeper theoretical support for the alignment algorithms, which was absent from the existing literature. See Reviewer Dt29's comments: "The fact that both the weight-matching and activation-matching losses can be found in the commutativity condition is quite neat, as (to the best of our knowledge) these two methods were proposed as heuristics." We respectfully disagree that any of our findings were well-established, since the notion of LLFC never appeared in any prior work. We'd also like to clarify the main contributions and logical progression of this paper: - We formulate LLFC, which is a stronger generalization of LMC (Lemma 1). - We empirically find that LMC networks always satisfy LLFC in practice, without exception (Sec 4). - We identify two conditions (weak additivity and commutativity) which imply LLFC, and verify them empirically. Together, our work establishes LLFC as a more fine-grained and fundamental phenomenon than LMC. This novel perspective contributes to a new understanding of LMC and the spawning and permutation methods. Lemma 1 is only to formally verify that LLFC is stronger than LMC, but is not the main technical contribution of this paper. **Q2: “LLFC implying LMC is somehow less interesting than LMC implying LLFC, but the latter direction is not explored.
Only co-occurrence is observed experimentally, making the direction of causation unclear…”** **A2**: We have done a significant number of experiments to verify "LMC implies LLFC in practice", i.e., all existing LMC networks satisfy LLFC, without exception (Sec 4). This shows that LLFC is a more fundamental phenomenon that may be the underlying cause of LMC in practice, and we make use of this perspective to obtain new insights (Sec 5). For the reverse direction, it is unlikely that LMC can be shown to imply LLFC unconditionally, because LMC is a global property that only concerns the network output, whereas LLFC characterizes the finer details in each layer of the network. **Q3: “For spawning LMC networks, it is unclear how the early training epochs contribute to enabling LMC. The experiments do not consider the evolution of LMC networks through training time.” and “how LLFC relates to the onset of LMC at an early point in training”** **A3**: Thank you for the comment. In Sec 5.2, we find that two spawned networks share similar principal directions in model weights, which contributes to the satisfaction of the commutativity condition, thus leading to the satisfaction of LLFC and thereby LMC. This indicates that the early training epochs determine the top principal directions of the weights. In light of your suggestion, as shown in Figure 6 (global response), we conduct new experiments to evaluate the relationship between the number of shared early training epochs and the similarity between principal directions of spawned networks. We kindly refer you to the global response for more experimental details. **Q4: About baselines for comparison. “Figure 2 and 3: no cosine similarity for interpolated networks that are not linearly connected…”, “Figure 4: no sparsity baseline for random networks…” and “Figure 5: no commutativity distance for non-LMC networks…”** **A4**: Thank you for your suggestions.
As shown in Figures 2 to 4 (global response), we conduct new experiments to add baselines for comparison. We kindly refer you to the global response for more experimental details. **Q5: “Section 5.2: how does similarity of $U_A$ and $U_B$ imply commutativity when $W=U\Sigma V$ also depends on $V$, which could differ significantly between two networks?...”** **A5**: Thank you for your comment. As shown in Figure 5 (global response), we have verified the similarity between $V_A$ and $V_B$, and had the same observation as for $U$. We kindly refer you to the global response for the experimental details. We will add this to the final version. **Q6: “How much would solving the quadratic assignment problem improve over existing algorithms? …it is not obvious whether minimizing the QAP will lead to significant improvements…”** **A6**: Solving QAPs is an NP-hard problem (Appendix C). Designing an efficient practical algorithm for QAPs is a challenging open problem on its own, which is beyond the scope of this paper. While we are currently unable to solve the QAP corresponding to Eq. (9), it is possible that solving the QAP will allow LMC to happen under weaker conditions (e.g. under a smaller network width compared to Git Re-Basin) because it enlarges the solution space compared with activation and weight matching. **Q7: “Weak additivity seems to rely heavily on the ReLU function. Does LLFC occur for other activation functions?”** **A7**: We did some preliminary experiments which showed that LMC and LLFC are difficult to achieve for other activation functions like tanh and sigmoid, so the activation function does play a role in LMC/LLFC. Since ReLU is the activation used in standard architectures including ResNet, VGG, etc. and is used in all previous work on LMC, an analysis based on ReLU is already highly informative given its wide adoption. We leave the exploration beyond the ReLU activation as a future direction.
**Q8: “Do networks exist where LMC holds but not LLFC?…”** **A8**: No, we did not find any networks where LMC holds but not LLFC. We have conducted extensive experiments across diverse datasets and model architectures (Sec 4 & Appendix D.2). --- Rebuttal Comment 1.1: Comment: Thank you for the extensive revisions to the figures to include convincing baselines. I have increased my score accordingly. Despite the straightforward connection between weight/activation matching and LLFC, I concede that LLFC is a useful formalization of these heuristics. However, given that permutation methods are still basically directly optimizing for LLFC, I am still skeptical of the insights gained into LMC by this work. In particular, I would appreciate a more thorough exploration of how the theoretical findings relate to practical applications in the following points. 1) Weak additivity: given that you already have preliminary results on different activation functions, I would have liked to see a comparison of the LMC of networks relative to the degree to which their activation functions are weakly additive. The point is not just to use widely-adopted activation functions, but to experimentally verify that weak additivity is an important condition for LLFC and/or LMC. For example, if it were possible to get LMC with non-ReLU activations despite not having LLFC, this would be extremely informative on the reverse direction (LMC implying/not-implying LLFC). 2) Minimizing QAP: the point is not to solve QAP, but to ask how much of an advantage QAP has over only solving the LAP for activations or weights, since in the limit (infinite width or networks that are exact permutations of one another) all of these are equivalent. In particular, the positive terms of the QAP can simply be solved as a bigger LAP, so the only question remaining is how large the negative cross terms are relatively speaking.
But there may be reasons for these cross terms to be small, hence guaranteeing the effectiveness of the heuristics - e.g. conceptually weak additivity and commutativity seem to be findings in a similar vein. Alternatively, maybe the cross terms are significant and reveal where the heuristics fail. I would greatly appreciate more discussion of these cross terms in the paper. --- Reply to Comment 1.1.1: Comment: Thank you for appreciating our contributions and for raising the score! **Weak Additivity with other activations** Thank you for the suggestion. We did some preliminary experiments on different activation functions. Specifically, we conduct experiments on MLPs on MNIST with different activation functions, including ReLU, Sigmoid, and Tanh. For each model with different activation functions, we apply the spawning method to obtain two modes, $\boldsymbol{\theta_A}$ and $\boldsymbol{\theta_B}$, spawning at the same epoch. Over the test set $\mathcal{D}$, we measure the error barrier (defined in Frankle et al. [8]) between $\boldsymbol{\theta_A}$ and $\boldsymbol{\theta_B}$, and measure the LLFC and the weak additivity condition. Here, error barrier is denoted as $\text{Err}\_{Barrier}$, where $\text{Err}\_{Barrier} = \max\_{\alpha \in [0, 1]} \text{Err}\_{\mathcal{D}}(\alpha \boldsymbol{\theta}_A + (1-\alpha) \boldsymbol{\theta}_B) - \frac{1}{2}(\text{Err}\_{\mathcal{D}}(\boldsymbol{\theta}_A) + \text{Err}\_{\mathcal{D}}(\boldsymbol{\theta}_B))$. In the table below, for the cases of MLP(ReLU) and MLP(Sigmoid), the values of both $\text{Err}\_{Barrier}$ and $\mathbb{E}\_{\mathcal{D}, \ell \in [L]}[1-\text{cosine}^{(\ell)}\_{0.5}(\boldsymbol{x}_i)]$ are close to zero; for the case of MLP(Tanh), those values are not negligible. 
Correspondingly, the values of $\mathbb{E}\_{\mathcal{D}, \ell \in [L]}[\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}^{(\ell)},\tilde{\boldsymbol{h}}\_{i, B}^{(\ell)})]$ for MLP(ReLU) and MLP(Sigmoid) are relatively small compared to MLP(Tanh). This observation demonstrates a correlation between the onset of LMC/LLFC and the weak additivity condition, thus suggesting that the weak additivity condition is important for LMC/LLFC. Also, we did not find any instance where LMC holds but LLFC does not.

| | $\text{Err}\_{Barrier}$ (%) | $\mathbb{E}\_{\mathcal{D}, \ell \in [L]}[1-\text{cosine}^{(\ell)}_{0.5}(\boldsymbol{x}_i)]$ | $\mathbb{E}\_{\mathcal{D}, \ell \in [L]}[\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}^{(\ell)},\tilde{\boldsymbol{h}}\_{i, B}^{(\ell)})]$ |
| ------------ | -------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| MLP(ReLU) | 0.115 | 0.0385 | 0.1525 |
| MLP(Sigmoid) | 0.060 | 0.0203 | 0.0799 |
| MLP(Tanh) | **2.910** | **0.1157** | **0.4058** |

**Minimizing QAP:…ask how much of an advantage QAP has over only solving the LAP for activations or weights…** Thank you for the very interesting comments. We agree that further studying the advantages of solving the QAP over the LAP is an important future direction and will make sure to discuss these points in the paper.
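For reference, the error-barrier quantity $\text{Err}_{Barrier}$ defined in the rebuttal above reduces to a short grid computation once an error function is available. A minimal sketch (our code; `err_fn` is a hypothetical stand-in for $\text{Err}_{\mathcal{D}}$, here a toy function with two zero-error endpoints, not the authors' evaluation pipeline):

```python
import numpy as np

# Err_Barrier = max_{alpha in [0,1]} Err(alpha*theta_A + (1-alpha)*theta_B)
#               - 0.5 * (Err(theta_A) + Err(theta_B)),
# approximated here on a uniform grid of alpha values.
def error_barrier(theta_a, theta_b, err_fn, num_alphas=25):
    alphas = np.linspace(0.0, 1.0, num_alphas)
    interp = [err_fn(a * theta_a + (1 - a) * theta_b) for a in alphas]
    return max(interp) - 0.5 * (err_fn(theta_a) + err_fn(theta_b))

# Toy "error": zero at both endpoints, largest at the midpoint,
# mimicking two minima separated by a barrier.
theta_a = np.array([1.0, 0.0])
theta_b = np.array([-1.0, 0.0])
err_fn = lambda th: (float(np.dot(th, th)) - 1.0) ** 2
print(round(error_barrier(theta_a, theta_b, err_fn), 6))  # -> 1.0
```

A near-zero barrier under this computation is what the ReLU and Sigmoid rows of the table report, while the Tanh row shows a barrier that is not negligible.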
Summary: This paper introduces Layerwise Linear Feature Connectivity (LLFC). Compared to the better-known linear mode connectivity (LMC), which states that networks trained by SGD are linearly connected modulo permutation, LLFC suggests that the feature maps of every layer are connected. As shown in the paper, LLFC is a strictly stronger property than LMC. Empirical results on a range of architectures (ResNet-20, VGG-16, MLP) and datasets (MNIST, CIFAR-10) suggest that LLFC co-occurs with LMC. The authors then identify two conditions that collectively imply LLFC: weak additivity, which requires ReLU to behave like a linear activation on two modes, and commutativity, which requires that the next-layer linear transformations applied to the internal features of two networks can be interchanged. They verify empirically that these two properties hold for modes that satisfy LLFC. Finally, the authors show that two common methods to obtain linearly connected modes, the spawning method and the permutation method, both promote the commutativity property, which explains their effectiveness. Strengths: - The paper reveals a novel and more general notion of linear mode connectivity, which is an interesting phenomenon that has attracted recent attention in the ML community. The authors provide precise definitions and a set of sufficient conditions for LLFC. Their observation is novel and advances the understanding of the origin of linear connectivity. - Experiments are sound and provide useful insights. In particular, empirical results support the occurrence of LLFC and validate that the two conditions for LLFC approximately hold in common settings. - Through dissecting the conditions required for LLFC, this paper also explains why the spawning method and permutation method produce LMC. Since theoretical results are sparse in explaining the origin of LMC, this paper’s contribution on the topic is significant. - The writing is clear and well organized.
Weaknesses: - The analysis for the cause of LLFC is limited to ReLU activation. Extending the weak additivity condition to different activations does not seem straightforward. To demonstrate the prevalence of LLFC, it might help to include experiments on neural networks with different activations. - As pointed out by the authors and shown in figures 4 and 6, weak additivity for ReLU activations and commutativity only approximately hold in real neural networks. Hence Theorem 1 describes conditions for perfect LLFC in an idealistic setting. While this result is significant and novel, a more careful treatment of the approximated version could make the theoretical contribution stronger. - There are some gaps between theory and empirical observation. Specifically, the following results are not well explained: (a) Why are the pre-activations sparse? Is this universal to all architectures? (b) Why do the modes obtained by the spawning method share similar principal directions of model weights in each layer? While these are empirically verified in the paper, it would be helpful to point the readers to existing theoretical analysis, if available. - While LLFC is an interesting observation and provides insights on LMC, there seem to be few applications that leverage LLFC. The idea of averaging features mentioned in the conclusion is interesting, but it is not clear how to implement feature averaging and under what situations this would be beneficial. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the definition of LLFC, there is a scaling factor $c$ that is not predicted by theorem 1. According to the authors, this inconsistency can be attributed to the accumulation of errors in the two conditions (line 263). Why does the accumulation of errors result in a scaling difference instead of another type of modification such as an additive term?
- Does spawning or permutation have any impact on the sparsity of pre-activation $\tilde{H}^{(l)}$ and the weak additivity condition? - Is condition 1+2 the only way to guarantee LLFC? In particular, identifying other sets of sufficient conditions for LLFC may lead to new permutation methods that find linearly connected modes, other than weight matching or activation matching. - The experiments use SGD with momentum in training. Does the choice of optimization method affect the occurrence of LLFC? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. There are no potential negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: “The analysis for the cause of LLFC is limited to ReLU activation…To demonstrate the prevalence of LLFC, it might help to include experiments on neural networks with different activations.”** **A1**: We note that ReLU is the activation used in standard architectures including ResNet, VGG, etc., and is used in all previous work on LMC. An analysis based on ReLU is already highly informative given its wide adoption. In fact, we did some preliminary experiments which showed that LMC is difficult to achieve for some other activation functions. We leave the exploration beyond the ReLU activation as a future direction. **Q2: “While this result is significant and novel…the approximated version could make the theoretical contribution stronger.”** **A2**: Thank you for appreciating the significance and novelty of Thm 1. We agree that an approximate version could make the theoretical contribution stronger. We currently adopt the exact formulation for ease of presentation, as we believe the concise formulation already conveys the main ideas and is easier to understand and build upon. We remain open to adding a careful approximate version of the theorem to the paper. Please kindly refer to our global response for more discussion on the limitations of our work. **Q3: “There are some gaps between theory and empirical observation…(a) Why are the pre-activations sparse?…(b) Why do the modes obtained by the spawning method share similar principal directions of model weights in each layer?…it would be helpful to point the readers to existing theoretical analysis, if available”** **A3**: We agree that (a) and (b) are intriguing phenomena, and theoretical explanations would be valuable. However, we are not aware of any existing theoretical analysis that's directly relevant.
There are some empirical studies that bear some degree of relevance, e.g., [cite 2] empirically investigate the sparsity of the activations; [cite 1] studies the evolving statistics of model weights during the early phase of training. We agree that theoretical studies of these questions are important future directions. [cite 1] Jonathan Frankle, David J. Schwab, and Ari S. Morcos. The early phase of neural network training. In International Conference on Learning Representations, 2020. [cite 2] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res., 22(1), jan 2021. ISSN 1532-4435 **Q4: “…there seems to be few applications that leverages LLFC…”** **A4**: The goal of this paper is to unveil nontrivial phenomena that offer elucidating insights into the fundamental mechanisms of deep learning. Akin to how LMC served as inspiration for applications like model soup [31], LLFC could potentially inspire applications such as feature averaging. We leave this prospect as a future direction, opting not to explore it within the scope of a single paper. **Q5: “In the definition of LLFC, there is a scaling factor $c$ that is not predicted by theorem 1…Why does the accumulation of errors result in a scaling difference instead of another type of modification such as an additive term?”** **A5**: Thank you for the great question. First of all, in most cases, we empirically observe that $c$ is close to 1 (See Appendix D.2), as predicted by Theorem 1. In other cases, we find that employing a scaling factor enables a much better description of the practical behavior than an additive error term. We will include this discussion in the paper. 
**Q6: “Does spawning or permutation have any impact on the sparsity of pre-activation $\tilde{\boldsymbol{H}}^{(\ell)}$ and the weak additivity condition?”** **A6**: Both the spawning and permutation methods do affect the weak additivity condition. In Figure 3 (global response), we show that the values of $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B})$ (with spawning or permutation) are negligible compared to $\text{Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ (independently trained networks). Please kindly refer to the global response for more details. On the other hand, pre-activation sparsity is not affected because it is only a property of a single network. **Q7: “Is condition 1+2 the only way to guarantee LLFC? In particular, identifying other sets of sufficient conditions for LLFC may lead to new permutation methods…other than weight matching or activation matching.”** **A7**: A very interesting question. Based on our theoretical and empirical results, we believe that LLFC is strongly related to Conditions 1+2. On the other hand, while Conditions 1+2 are very insightful for our understanding of LLFC and LMC, we agree that if there exist other sufficient conditions, they might be helpful for designing new permutation methods. **Q8: “The experiments use SGD with momentum in training. Does the choice of optimization method affect the occurrence of LLFC?”** **A8**: Thank you for your question. Indeed, we have carried out experiments employing the Adam optimizer for MLPs trained on the MNIST dataset, and observed that LLFC still occurs. Consequently, we believe that the choice of optimization methods does not affect the occurrence of LLFC. To further verify, we conduct additional experiments entailing the training of ResNet-20 on the CIFAR-10 dataset with the Adam optimizer, as shown in Figure 1 (global response). We kindly direct you to the global response section for more experimental details.
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. I have read the rebuttal and other reviews and will maintain my score.
Summary: In this work, the property of Layerwise Linear Feature Connectivity (LLFC) of neural network representations is introduced, which is a stronger generalization of linear mode connectivity (LMC). They show that LLFC often occurs when LMC does. Moreover, they give a possible mechanism by which LLFC may occur (ReLU activation additivity and weight commutativity), and provide evidence that LLFC often does occur by this mechanism. Finally, they reinterpret spawning and permutation-finding based methods for LMC as promoting commutativity. Strengths: 1. Introduction exposition is quite good. 2. The relationship between spawning / permutation-finding methods and commutativity is really interesting and insightful. The fact that both the weight-matching and activation-matching losses can be found in the commutativity condition is quite neat, as (to the best of my knowledge) these two methods were proposed as heuristics. The discussion of weight matrix rank in Appendix E is also insightful. 3. Figure 5 on the empirical measurement of commutativity is very nice, especially because the two baseline numbers (weight distance and activation distance) are so large, so commutativity really seems to be a good thing to look at here. 4. Although LLFC is stronger than LMC, in some sense it may make the study of LMC easier, because you have more to look at (features in each layer and the two sufficient conditions you give, not just loss of a whole neural net). Weaknesses: 1. You don't really cover the last method (straight-through estimator) from Git Re-Basin, even though it works the best in many cases. Perhaps you should note this. 2. Although these experimental setups are standard, it would be good to see to what extent this holds beyond these few architectures for image classification. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Typos: 1.
In 5.2, (i) should have $W_{B, \mathrm{pri}}^{(l)} H^{(l-1)}_A + W_{A, \mathrm{pri}}^{(l)} H_B^{(l-1)}$. 2. 5.3: "conprehensive" -> "comprehensive" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I do not see much discussion of limitations, besides on the hardness of exactly minimizing the commutativity property objective. Perhaps you could state that the empirical evidence is limited to image classification. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Ask for additional experiments to cover the straight-through estimator method from Git Re-basin [1]. “You don't really cover the last method (straight-through estimator) from Git Re-Basin, even though it works the best in many cases. Perhaps you should note this.”** **A1**: We greatly appreciate your suggestions. We follow this suggestion and conduct new experiments to cover the Straight-Through Estimator (STE) method. Our experiments follow the same settings as in Ainsworth et al. [1], which applied the STE method to MLPs trained on the MNIST and CIFAR-10 datasets. Correspondingly, in congruence with the experiments in Section 4 of the main paper, we measure both ${\rm cosine}\_{\alpha}(\boldsymbol{x}_i)$ and ${\rm cosine}\_{A, B}(\boldsymbol{x}_i)$ for each data point $\boldsymbol{x}_i$ in the test set $\mathcal{D}$. In Figure 1 (global response), the values of $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{\alpha}(\boldsymbol{x}_i)]$ are close to zero compared to $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{A,B}(\boldsymbol{x}_i)]$ across different layers, datasets, and different values of $\alpha$, which further convincingly verifies that LLFC consistently co-occurs with LMC. Please kindly refer to the global response for more experimental details. **Q2: Suggestions for empirical evidence beyond image classification. “Although these experimental setups are standard, it would be good to see to what extent this holds beyond these few architectures for image classification.”** **A2**: Thank you for your suggestions. We acknowledge that our experiments follow the standard setups in the LMC literature. We will leave the exploration beyond image classification as a future direction. We have included this in the list of limitations (see global response). **Q3: Typos.** **A3**: Thank you for spotting the typos! We will fix them. **Q4: Ask for more discussion about the limitation of this paper.
“I do not see much discussion of limitations, besides on the hardness of exactly minimizing the commutativity property objective. Perhaps you could state that the empirical evidence is limited to image classification.”** **A4**: Thank you for the suggestion. Please refer to the global response for a list of limitations, which we will add to the paper. --- Rebuttal Comment 1.1: Comment: Hello, Thank you for your reply. It is great that you include STE experiments and limitations now! I would like to retain my score.
Rebuttal 1: Rebuttal: **Limitations.** 1. In Appendix C, identifying a permutation that directly enforces the commutativity condition involves solving an NP-hard QAP. We leave the QAP-solving problem as a future direction. 2. Our Theorem 1 predicts LLFC in an ideal case, while in practice, a scaling factor $c$ is introduced to the definition of LLFC to better describe the experimental results. Realistic theorems and definitions (approximated version) are deferred to future research. 3. Our current experiments mainly focus on image classification, aligning with existing literature on LMC. While we appreciate suggestions to extend empirical evidence beyond image classification, we commit to exploring this avenue in future research. **Experimental details.** **[Figure 1] Further verify LLFC under various settings.** To verify the LLFC property, we measure ${\rm cosine}\_{\alpha}(\boldsymbol{x}_i)$ and compare with ${\rm cosine}\_{A, B}(\boldsymbol{x}_i)$, consistent with the main paper. In Fig 1, the values of $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{\alpha}(\boldsymbol{x}_i)]$ are close to 0 compared with $\mathbb{E}\_{\mathcal{D}}[1-\text{cosine}\_{A, B}(\boldsymbol{x}_i)]$, and thus convincingly verify our claim. Notably, all the experimental settings are standard: the settings for MLPs on MNIST and CIFAR-10 follow Ainsworth et al. 2022 [1]; the training of ResNet-20 on CIFAR-10 follows the default settings of PyTorch; the training of ResNet-50 on Tiny ImageNet follows Frankle et al. [8]. **[Figure 2] Add baseline of non-LMC models for verifying LLFC.** We measure ${\rm cosine}_{0.5}(\boldsymbol{x}_i)$ on both the linearly connected models and independently trained models, denoted as ${\rm cosine}\_{LMC}(\boldsymbol{x}_i)$ and ${\rm cosine}\_{not \ LMC}(\boldsymbol{x}_i)$ respectively.
In Fig 2, the values of $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{LMC}(\boldsymbol{x}_i)]$ are negligible compared to $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{not \ LMC}(\boldsymbol{x}_i)]$. Therefore, we rule out the possibility that LLFC is a byproduct of averaging. **[Figure 3] Add baselines for verifying weak additivity and sparsity.** First, to verify the weak additivity condition, we compare ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B})$ with ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ and ${\rm Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$. The subscripts $A, B, C, D$ denote four different models, where $A, B$ are linearly connected and $C, D$ are independently trained. In Fig 3, the values of ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, A}, \tilde{\boldsymbol{h}}\_{i, B})$ are negligible compared with ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ and ${\rm Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$. Therefore, we verify that the weak additivity condition holds for modes that are linearly connected. Second, to verify the sparsity claim, we measure the sparsity of the pre-activations of both well-trained networks and randomly initialized networks, using $S(\boldsymbol{x}) = \frac{\|\boldsymbol{x}\|_1}{n\|\boldsymbol{x}\|\_{\infty}}$ ($\boldsymbol{x} \in \mathbb{R}^n$), denoted as $S(\tilde{\boldsymbol{h}}\_{i, \text{end}})$ and $S(\tilde{\boldsymbol{h}}\_{i, \text{init}})$ respectively. In Fig 3, the majority of the values of $S(\tilde{\boldsymbol{h}}\_{i, \text{end}})$ are distinctly smaller than $S(\tilde{\boldsymbol{h}}\_{i, \text{init}})$. Therefore, we validate our sparsity claim. Notably, in Fig 3, the values of ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ are smaller than ${\rm Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$.
The gap between ${\rm Dist}\_{\sigma}(\tilde{\boldsymbol{h}}\_{i, C}, \tilde{\boldsymbol{h}}\_{i, D})$ and ${\rm Dist}\_{\sigma}(\boldsymbol{r}_1, \boldsymbol{r}_2)$ increases as the gap between $S(\tilde{\boldsymbol{h}}\_{i, \text{end}})$ and $S(\tilde{\boldsymbol{h}}\_{i, \text{init}})$ widens. This observation supports the claim that the weak additivity condition is likely to hold if the pre-activations are sparse enough. **[Figure 4] Add baselines of non-LMC models for verifying commutativity.** We measure ${\rm Dist}\_{com}$ on both the linearly connected models and independently trained models, denoted as ${\rm Dist}\_{com, LMC}$ and ${\rm Dist}\_{com, not\ LMC}$. In Fig 4, the values of ${\rm Dist}\_{com, LMC}$ are negligible compared with ${\rm Dist}\_{com, not\ LMC}$, which validates the commutativity condition for LMC models. **[Figure 5] Verify the top singular vectors of $V$ hold a small principal angle similar to $U$.** We extract the top $k$ singular vectors from $V_A$ and $V_B$ and compute the minimal principal angle $\beta_{V}$ between the subspaces spanned by these singular vectors. We compare the principal angle of two linearly connected models, $\beta_{LMC, V}$, with that of two independently trained models, $\beta_{not \ LMC, V}$. In Fig 5, the value $1-\cos\beta_{LMC, V}$ is close to 0 while $1-\cos\beta_{not\ LMC, V}$ is significantly large, thus confirming our claim. **[Figure 6] Relationship between shared early training epochs and the similarity between principal directions of spawned models.** We compute both $1-\cos\beta_{U}$ and $1-\cos\beta_{V}$ for two models spawned at different iterations $t$. In Fig 6, across different layers, both $1-\cos\beta_{U}$ and $1-\cos\beta_{V}$ decrease as the spawning iteration increases. This implies that the similarity between principal directions of spawned models increases when more early training iterations are shared.
Furthermore, a noticeable similarity emerges between the curves of $1-\cos\beta$ vs. spawning iteration $t$ and the curves of the instability (defined in Frankle et al. [8]) vs. the spawning iteration $t$ (Fig 3 in Frankle et al. [8]). This enhances the credibility of our analysis significantly. Pdf: /pdf/2cc7752ea2164c3d1e24f38c2df966ca80f70cdf.pdf
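For concreteness, the two quantities used in the Figure 3 and Figure 5 checks above — the sparsity measure $S(\boldsymbol{x}) = \|\boldsymbol{x}\|_1 / (n\|\boldsymbol{x}\|_{\infty})$ and the minimal principal angle between top-$k$ singular subspaces — can be sketched in a few lines of NumPy (a minimal re-implementation under our own naming; the authors' actual code may differ):

```python
import numpy as np

def sparsity(x):
    """Sparsity measure S(x) = ||x||_1 / (n * ||x||_inf).
    Smaller values indicate sparser pre-activations."""
    x = np.abs(np.asarray(x, dtype=float).ravel())
    return float(x.sum() / (x.size * x.max()))

def min_principal_angle_cos(A, B, k):
    """Cosine of the minimal principal angle between the subspaces
    spanned by the top-k left singular vectors of A and B."""
    Ua = np.linalg.svd(A, full_matrices=False)[0][:, :k]
    Ub = np.linalg.svd(B, full_matrices=False)[0][:, :k]
    # For matrices with orthonormal columns, the singular values of
    # Ua^T Ub are the cosines of the principal angles between the spans.
    return float(np.linalg.svd(Ua.T @ Ub, compute_uv=False).max())
```

The returned cosine is close to 1 when the two top-$k$ subspaces share a nearly common direction, matching the $1-\cos\beta$ quantity plotted in Figs 5 and 6.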
Dataset source: NeurIPS_2023_submissions_huggingface. Conference year: 2023.
Summary: This paper presents a special case of Linear Mode Connectivity (LMC) denoted as Layerwise Linear Feature Connectivity (LLFC). Whereas two trained neural networks present LMC if a convex combination of their parameters produces a neural network with similar training loss and accuracy, two trained neural networks present LLFC if a convex combination of their features in every layer also produces a neural network with similar features. Although it is a special case, the authors find that LMC and LLFC co-occur very often. Moreover, the authors characterize conditions that jointly imply LLFC between two ReLU networks. ***** Following the authors' response, I am updating my score from 6 to 7. Strengths: The writing of the paper is very clear and the experiments are distributed in such a way along the text that it is easy to follow along. Moreover, I find the study of how these conditions may emerge in ReLU networks particularly relevant, since in that simpler setting it is easier to understand what is going on. Weaknesses: The authors sell the idea of LLFC in a very positive tone, which is actually quite common in papers, but I cannot help but wonder about the following: Two neural networks presenting LLFC would have very similar parameters, and the fact that LLFC co-occurs with LMC implies that LMC is observed due to the similarity between trained neural networks - either because the first epochs of training determined what model would be ultimately obtained; or because there are relatively few optimal neural networks upon permutation. However, even under this interpretation, I believe that this study helps demystifying LMC if it turns out that LMC rarely occur without LLFC. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The introduction attributes the first observation of LMC to [8], but Section 2 attributes it to earlier work [21]. Definition 2 needs rework because it is clearly invalid if $\alpha=0$ or $\alpha=1$ unless $c=1$.
In fact, this is not even how you measure it in Section 4, since you use cosine similarity instead. I believe that the use of a tolerance factor $\epsilon$ like you did in Line 220 would be more adequate in this definition. Would it make sense to assume that Definition 3 has some connection with stable neurons in ReLU networks, as in the following papers: https://arxiv.org/abs/1711.07356 & https://arxiv.org/abs/2102.07804 ? It seems to me that Condition 1 was inspired by spawning and Condition 2 by permutation, even though they are not worded in that way. Is that a valid way to interpret them? I would like to hear the thoughts of the authors about what I described in Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I could not identify a discussion about limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
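Definition 2 and the cosine-based measurement discussed above can be made concrete with a small sketch: for a toy ReLU MLP, compare each layer's features of the $\alpha$-interpolated network against the $\alpha$-convex combination of the endpoints' features. This is an illustration with invented names, not the paper's implementation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def features(Ws, x):
    """Per-layer post-activation features of a toy ReLU MLP."""
    hs, h = [], x
    for W in Ws:
        h = relu(W @ h)
        hs.append(h)
    return hs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def llfc_gap(Ws_a, Ws_b, x, alpha=0.5):
    """Per-layer 1 - cosine between the features of the alpha-interpolated
    network and the alpha-convex combination of the endpoints' features."""
    Ws_m = [alpha * Wa + (1 - alpha) * Wb for Wa, Wb in zip(Ws_a, Ws_b)]
    h_m, h_a, h_b = features(Ws_m, x), features(Ws_a, x), features(Ws_b, x)
    return [1 - cosine(hm, alpha * ha + (1 - alpha) * hb)
            for hm, ha, hb in zip(h_m, h_a, h_b)]
```

For two identical networks the gap is exactly zero at every layer; LLFC asserts it stays near zero (up to a scaling factor $c$) for linearly connected trained networks, which is what the paper measures via $\mathbb{E}_{\mathcal{D}}[1-{\rm cosine}_{\alpha}(\boldsymbol{x}_i)]$.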
Rebuttal 1: Rebuttal: **Q1: Concerns about the trivial case that two Neural Networks (NNs) exhibiting LLFC share similar weights. “I cannot help but wonder about the following: Two neural networks presenting LLFC would have very similar parameters, and the fact that LLFC co-occurs with LMC implies that LMC is observed due to the similarity between trained neural networks”** **A1**: Thank you for your great question. In fact, our experiments have already ruled out the trivial case that two NNs share similar weights. First, we directly evaluated the difference between the weights of two NNs that yield LLFC, namely $\text{Dist}_W = \text{dist}\left(\text{vec}({\boldsymbol{W}}_A^{(\ell)}), \text{vec}({\boldsymbol{W}}_B^{(\ell)})\right)$, where $\text{dist}(\boldsymbol{x}, \boldsymbol{y}) := \|\boldsymbol{x} - \boldsymbol{y}\|^2 / (\| \boldsymbol{x}\| \cdot \|\boldsymbol{y}\|)$. In Figure 5 (main paper), the values of $\text{Dist}\_W$ are usually within 0.5~1.5 while the values of $\text{Dist}\_{com}$ are close to zero. Second, we calculated the cosine similarity between the features of two linearly connected NNs, namely, ${\rm cosine}\_{A, B}(\boldsymbol{x}_i)$. If the weights of two NNs are similar, their features should display similarity as well. Nevertheless, in Figures 2 and 3 (main paper), the value of $\mathbb{E}\_{\mathcal{D}}[1-{\rm cosine}\_{A, B}(\boldsymbol{x}_i)]$ could reach its maximum at around 0.75. Consequently, we can confidently dismiss the trivial case that two NNs exhibiting LLFC share similar weights. **Q2: Whether LMC rarely occurs without LLFC. “However, even under this interpretation, I believe that this study helps demystifying LMC if it turns out that LMC rarely occur without LLFC.”** **A2**: Thank you for acknowledging our contribution. We would like to clarify that we did not find any instances where two NNs exhibit LMC but not LLFC.
We conducted extensive experiments across diverse datasets, network architectures, and various layers within the network (Sec 4 & Appendix D.2). Therefore, we believe LLFC is a more fundamental phenomenon that helps demystify LMC. **Q3: A reference problem. “The introduction attributes the first observation of LMC to [8], but Section 2 attributes it to earlier work [21].”** **A3**: Thank you for pointing out this problem. In fact, [8] references [21] and attributes the initial observation to [21]. However, [8] was the first to formally define and thoroughly investigate the LMC problem. Thank you for your careful review; we will clarify this point in the revised version of our paper. **Q4: Concerns about Definition 2. “Definition 2 needs rework because it is clearly invalid if $\alpha =0$ or $\alpha =1$ unless $c=1$.”** **A4**: Thank you for your careful review again. In Definition 2, $c$ is allowed to depend on the interpolation parameter $\alpha$, i.e., "$\forall \alpha \in [0, 1], \exists c > 0$". When $\alpha = 0$ or $\alpha = 1$, $c=1$ will satisfy the condition. Consequently, the definition remains valid even at the boundary cases of $\alpha = 0$ or $\alpha = 1$. For other values of $\alpha$, $c$ can be different from $1$. Thus, using cosine similarity to verify LLFC (Definition 2) is appropriate. **Q5: Whether Definition 3 has some connection with stable neurons in ReLU networks. “Would it make sense to assume that Definition 3 has some connection with stable neurons in ReLU networks…?”** **A5**: Thank you for your question. Regarding the two papers you referenced, a stable neuron is defined as one whose output is the constant value zero ($y=0$) or the pre-activation output ($y=x$) on all inputs, which is a property concerning a single network. On the other hand, Definition 3 concerns a relation between two networks. Therefore, these two properties are orthogonal to each other, and their connection is not clear.
That said, both properties describe interesting linearity phenomena in ReLU networks, and we will make a note of this in the paper. **Q6: Question about the inspiration and interpretation of Conditions 1 and 2. “It seems to me that Condition 1 was inspired by spawning and Condition 2 by permutation, even though they are not worded in that way. Is that a valid way to interpret them?”** **A6**: We are happy to explain what inspired Conditions 1&2. In fact, they were not inspired by spawning or permutation methods but stemmed from the derivation process of Theorem 1. Through this, we identified that the two conditions facilitate the derivation of the feature connectivity across layers. Subsequently, we discovered the connection between Condition 2 and both spawning and permutation methods. This intriguing connection leads us to the conjecture that both the spawning and permutation methods essentially contribute to the fulfillment of LLFC. **Q8: Ask for more discussion about the limitations of this paper. “I could not identify a discussion about limitations in the paper.”** **A8**: We are happy to discuss the limitations of this paper. Please kindly refer to the global response for a list of limitations, which we will add to the paper. --- Rebuttal Comment 1.1: Comment: I appreciate and am satisfied with the responses from the authors. I am updating my score as a reflection of that.
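The normalized weight distance used for $\text{Dist}_W$ in A1 above, $\text{dist}(\boldsymbol{x}, \boldsymbol{y}) = \|\boldsymbol{x} - \boldsymbol{y}\|^2 / (\|\boldsymbol{x}\| \cdot \|\boldsymbol{y}\|)$, is straightforward to reproduce (our own minimal sketch, not the authors' code):

```python
import numpy as np

def dist(x, y):
    """dist(x, y) = ||x - y||^2 / (||x|| * ||y||), the normalized squared
    distance applied to flattened weight matrices for Dist_W."""
    x, y = np.ravel(x), np.ravel(y)
    return float(np.sum((x - y) ** 2) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

The metric is 0 for identical weights and grows quickly for dissimilar ones, which is why values in the 0.5~1.5 range rule out near-identical parameters.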
Physics-informed generative model for drug-like molecule conformers
Reject
Summary: This paper proposes PIDM, a novel generative model for generating 3D molecular conformations from 2D molecular graphs. The proposed method uses an ODE based diffusion model to generate 3D molecular conformations, and designs a denoising network with multiple different modules to capture various types of physical information from molecules. The proposed method achieves better performance than two baseline methods, GeoMol and GeoDiff, in experiments. Strengths: - This work proposes a series of novel strategies to capture geometric information from different geometric inputs (e.g., bonds, bends, torsions) by different modules in a model. The proposed model architecture can be useful for a variety of molecular machine learning tasks. - Experimental results show that the proposed method achieves promising performance when compared with two strong baseline methods, GeoMol and GeoDiff, in benchmark datasets. Weaknesses: Major: - A major weakness of this paper lies in that many necessary details of the proposed method are not presented or not sufficiently described, as detailed below. (1) The proper torsions and improper torsions introduced in line 40-41 are not formally defined and it is not easy to discriminate them by Figure 1. Also, it is not clear why the model has a module for proper torsions but no module for improper torsions. (2) It is not easy to understand why "The constraint on proper torsions is cyclic in the angle $\phi$ such that energy is commonly parameterized as a function of $n\phi-\phi_0$" (line 46). Does it mean that proper torsions are periodic and $\phi$ is the periodicity? How this constraint is incorporated in the designed model? (3) The details of several modules are not described in Section 2. The details of the computation process in graph attention module is not described. The cis-trans module in Figure 2 is not introduced.
Also, it is not clear what is the output ("solution" in Figure 2) of the model, is it the atom embedding, or the denoised atom coordinate? How are the outputs computed if they are not atom embeddings? - For experiments, it would make results stronger if authors can do comprehensive ablation studies about the proposed novel modules in the model as they are the major novelty contributions. Also, it is recommended to compare with an important baseline method, Torsional Diffusion [1]. Minor: - To make a better organization, it would be better to move Section 3 Datasets to the place just before Section 6 Experiments as they are more closely related. - It is hard to understand what Figure 4 tries to show. What do the y-axis values mean? And how these curves are related to explainability? The authors may provide more clarification, and are recommended to move these results to experiment part to maintain a better organization. [1] Torsional Diffusion for Molecular Conformer Generation. NeurIPS 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: No additional questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our submission and providing valuable feedback. Since you have provided no explicit questions, we will attempt to address your list of weaknesses. * The proper torsions and improper torsions introduced in line 40-41 are not formally defined and it is not easy to discriminate them by Figure 1 The definitions of these terms are standard and available from the references [19-21]. We apologize if the figures are not clear. Explicit equations are provided in the supplemental materials. * Also, it is not clear why the model has a module for proper torsions but no module for improper torsions We explain this in the supplemental materials: > “No generic improper torsion component is provided because improper torsions are already constrained by bond lengths and bond angles. Force field parameterizations typically include a select set of improper torsions primarily as a means to enforce planarity in conjugated systems. This is needed to characterize forces which are calculated as first order derivatives of energy with respect to atom position. Such functionality is not needed here.” * Does it mean that proper torsions are periodic and $\phi$ is the periodicity? We can appreciate the confusion. This sentence should probably have been stated as the following: “The constraint on proper torsions is cyclic in the angle $\phi$ such that energy is commonly parameterized as a trigonometric function of $n\phi + \phi_0$” * How this constraint is incorporated in the designed model? By using $\cos\phi$ and $\sin\phi$ as inputs to the MLP, we ensure a periodic function. * The details of several modules are not described in Section 2 All properties of these modules are explained in sufficient detail to reproduce their general form and understand their construction. Equations are provided in the supplementary materials. The details can be cross-referenced with the implementation, which is provided.
* The details of the computation process in graph attention module is not described. The GATv2 model is a standard technique and a suitable reference is provided. * The cis-trans module in Figure 2 is not introduced. Incorrect. It is referred to by name in lines 113-115. * Also, it is not clear what is the output ("solution" in Figure 2) of the model, is it the atom embedding, or the denoised atom coordinate? Line 82 states: “...representing our model as a denoising function D that provides an estimate of the true coordinates x.” This appears to be fairly clear. See also Equation (1). In addition, in line 86-87 it is stated “and a series of bonded subcomponents whose outputs are summed together for coordinate prediction.” * For experiments, it would make results stronger if authors can do comprehensive ablation studies about the proposed novel modules in the model as they are the major novelty contributions. Each bonded component has an explicit and important purpose, and so, removing any of them would degrade the model. We haven’t performed the experiment to explicitly measure the level of degradation. Early in development, the components were added and tested sequentially. * Also, it is recommended to compare with an important baseline method, Torsional Diffusion [1] Please see our global remarks concerning the paper [1]. In short, we do not consider this work a valid baseline. * To make a better organization, it would be better to move Section 3 Datasets to the place just before Section 6 Experiments as they are more closely related. It is somewhat awkward to talk about the results from training when you haven’t discussed the data set used for training. * It is hard to understand what Figure 4 tries to show. What do the y-axis values mean? And how these curves are related to explainability? Each bonded component is intended to predict a correction to atom positions based on the corresponding physical term (bond length, angle, torsion). 
To explain how the model determines the total correction to each atom position, we may inspect the output of each bonded component individually. Since each component corresponds to a specific type of physical term, we can understand the output of a component with respect to that term. Each bonded component uses a MLP to predict a movement distance along a vector. For the bend component, this vector is between the two outer atoms of the bend. The input to the MLP includes the distance between the two outer atoms (the x axis of Figure 4) and the relevant value of the noise schedule (the value of σ, used to distinguish between the various curves presented). The input to the MLP also includes the embedding from the atoms of the molecular structure associated with the bend, implied by the image of the molecule on the left. The vertical line corresponds to the ground truth value established by an independent invocation of the semi-empirical quantum mechanical simulation GFN2-xTB. Notice that, at the limit of no noise (σ = 0), if the distance between atoms matches the ground truth, this component suggests that no correction is needed. If the distance is too small or too large, a corresponding correction in approximate proportion to the error is applied to bring it to the ground truth. At larger noise values (σ > 0), the curve departs from this behavior, which can be understood as the overcorrection needed for efficient generation at larger noise values. Figure 4 is provided for one example structure for the bend component. One can consider any interesting molecule structure, with any challenging chemistry (limited only by the imagination), and repeat the process for any component. Or, if a particular molecule performs poorly during generation, one can study that specific molecule, and query specific terms to search for defects. 
* and are recommended to move these results to experiment part to maintain a better organization Figure 4 pertains to the model and has no direct relationship to the generation process, nor to the experimental results. --- Rebuttal Comment 1.1: Title: Follow-up Responses Comment: I appreciate the authors' hard work in clarifying their method contributions and addressing my concerns in the rebuttal. - I appreciate the authors' clarification and explanation about method details and Figure 4. - For paper organization, I think it is not awkward to move the dataset introduction to a place after the method design. Unless your training method is specially designed for some certain datasets, it is more logically smooth to tell people first what your method and novelty contributions are, then demonstrate the effectiveness of your contributions by experiments on your used datasets (here you may introduce your datasets). It is a common organization style of papers in the machine learning community, though it may not be the habit in other research areas. - I appreciate the authors' clarification of the major target of the proposed method in their rebuttal to all reviewers. To my understanding, the major problem to be solved by PIDM is to generate high-quality initial molecular conformations by creating accurate static terms (bond lengths & angles), and experiments show that PIDM can achieve this (Table 1). As for variable terms (proper torsions), PIDM does not aim to improve their generation accuracy as they may change in different scenarios. I agree that it is not suitable to compare PIDM with Torsional Diffusion (TD). However, I am not persuaded that generating high-quality initial molecular conformations is a really important task in practical applications, as RDKit can also achieve it and produce a good enough initial conformation; this is why TD starts from an RDKit-produced conformation and refines proper torsions.
In my opinion, the authors should conduct many more experiments to motivate the importance of generating accurate static terms when producing accurate initial molecular conformations. For instance, for the molecular conformation generation problem, the authors could show that PIDM + TD is much better than the original TD (i.e., RDKit + TD); for the problem of generating ligand conformations conditioned on a given ligand and protein pocket [1], the authors could show that replacing the commonly used RDKit-initialized molecular conformations with PIDM-initialized ones leads to much better performance. I suggest more clearly clarifying the central target of PIDM (i.e., its focus on static-term generation) and differentiating PIDM from other molecular conformation generation methods in the revision of the paper. Also, more experiments should be added to demonstrate the importance of the studied problem. As these cannot easily be completed within the short period of author-reviewer discussion, I tend to keep my decision of rejection, but encourage the authors to significantly reorganize this paper following my suggestions and resubmit it to a new venue. [1] EQUIBIND: Geometric Deep Learning for Drug Binding Structure Prediction. ICML 2022.
Summary: The paper presents a diffusion model to generate conformers of drug-like molecules. The score model architecture is novel. Strengths: The authors construct a diffusion model for conformer generation whose score model architecture is inspired by the structure of classical force fields. This is an interesting approach and quite different from the SE(3) or E(3) equivariant graph neural nets that are popular for this kind of problem. Weaknesses: I found parts of the paper to be quite unclear. Line 46: I don’t understand this $n\phi - \phi_0$ expression or ‘freedom to select one of $n$ torsion angles’. Is there a discrete set of admissible torsion angles? Sorry, I’m ignorant about molecular geometry – but so will be many of your readers at NeurIPS. Line 71 ‘attempts to model nonbonded distances which requires it to sample torsional space during training… torsional space is physically ambiguous’: I don’t understand these sentences. Figure 2: what shape is the output from each of the pink blocks? Is it (number of atoms) * 3? What is the regression target for the summed ‘solution’? In some works on molecule generation or conformation prediction, the regression target is the noise that was added to the atom positions, and in other works the regression target is the ‘clean’ atom positions from the training data, so this needs to be clarified. Line 125: what is ‘conformer inconsistency at the graph level’? Section 4 Training: if you want to experiment with different samplers after training then I think that during training you should sample values of $\sigma$ from a continuous range, rather than picking from N discrete values. If you sample by stepping through values of $\sigma$ not seen during training, you will be presenting the score model with inputs it never saw during training, and I expect the performance will be suboptimal. Figure 4: please explain what the x and y axes are here. In general, I did not understand the ‘explainability’ aspect of the work. 
Line 216: I think that the inability of GeoDiff to discriminate between a molecule and its mirror image is a fundamental issue with the GFN architecture. Table 1 caption says 'best values in each category are highlighted' but I cannot see the highlight (perhaps a printing problem). The guided generation task in section 7 seems arbitrary. I do not understand equation (12) because in the right-hand-side numerator we have a vector $y$ and then subtract $F(y)$ which appears to be a scalar. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Line 95 ‘uses bonds as graph edges’: why not allow some message-passing between non-bonded atoms? Line 127: is any clustering used to split the data, or can similar molecules appear in both training and test data? Could you please elaborate on the explainability of the model? For example, show examples of predictions and their explanations. Perhaps this is meant to be shown in Figure 4 but I did not understand it. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
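The reviewer's point about continuous noise levels can be made concrete with a short sketch. This is an illustration, not code from the paper; the σ bounds are hypothetical, and a log-uniform draw is used here for simplicity (the Karras et al. EDM recipe that the paper builds on trains with a log-normal σ distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sigma_discrete(n, sigmas):
    # Training as described in the paper: pick from N fixed noise levels.
    return rng.choice(sigmas, size=n)

def sample_sigma_continuous(n, sigma_min=0.01, sigma_max=10.0):
    # Reviewer's suggestion: draw log-uniformly from a continuous range,
    # so the score model sees every noise level an arbitrary sampler
    # might visit at generation time.
    u = rng.uniform(np.log(sigma_min), np.log(sigma_max), size=n)
    return np.exp(u)

sigmas = sample_sigma_continuous(10_000)
```

With the discrete scheme, a sampler stepping through σ values between the N training levels presents the model with inputs it never saw; the continuous scheme removes that gap by construction.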
Rebuttal 1: Rebuttal: Thank you for taking the time to read our submission and provide feedback. Let us begin with your questions. * Line 95 ‘uses bonds as graph edges’: why not allow some message-passing between non-bonded atoms? In chemistry, the classification of chemical groups is associated with bonded connections alone. Thus, we felt there was no need to incorporate message passing for nonbonded atoms. * Line 127: is any clustering used to split the data, or can similar molecules appear in both training and test data? See lines 135-141. Clustering of molecule structures is an interesting problem with no general solution (mapping chemical space to R^N is currently an unsolved problem). Instead, we employ simple random selection at the level of molecule structures in order to perform a data split between training, validation, and test subsets. From experience, we understand that molecular structure data sets tend to have families of closely related compounds. This means that there is the distinct danger that molecules in one part of a random split could be closely associated with molecules in another. To explicitly avoid this issue, we employed a standard structure-matching technique (circular fingerprints and the Tanimoto metric) to discard any molecule in the test data set with a structure that was even remotely similar to any molecule structure in the training or validation sets, or, for that matter, to any molecule in the entire GEOM-drugs data set. * Could you please elaborate on the explainability of the model? For example, show examples of predictions and their explanations. Perhaps this is meant to be shown in Figure 4 but I did not understand it. Each bonded component is intended to predict a correction to atom positions based on the corresponding physical term (bond length, angle, torsion). To explain how the model determines the total correction to each atom position, we may inspect the output of each bonded component individually.
Since each component corresponds to a specific type of physical term, we can understand the output of a component with respect to that term. Each bonded component uses an MLP to predict a movement distance along a vector. For the bend component, this vector is between the two outer atoms of the bend. The input to the MLP includes the distance between the two outer atoms (the x axis of Figure 4) and the relevant value of the noise schedule (the value of σ, used to distinguish between the various curves presented). The input to the MLP also includes the embedding from the atoms of the molecular structure associated with the bend, implied by the image of the molecule on the left. The vertical line corresponds to the ground truth value established by an independent invocation of the semi-empirical quantum mechanical simulation GFN2-xTB. Notice that, at the limit of no noise (σ = 0), if the distance between atoms matches the ground truth, this component suggests that no correction is needed. If the distance is too small or too large, a corresponding correction in approximate proportion to the error is applied to bring it to the ground truth. At larger noise values (σ > 0), the curve departs from this behavior, which can be understood as the overcorrection needed for efficient generation at larger noise values. Figure 4 is provided for one example structure for the bend component. One can consider any interesting molecule structure, with any challenging chemistry (limited only by the imagination), and repeat the process for any component. Or, if a particular molecule performs poorly during generation, one can study that specific molecule, and query specific terms to search for defects. ---- If you have the patience, let us address some of the weaknesses you reported. * ‘attempts to model nonbonded distances which requires it to sample torsional space during training… torsional space is physically ambiguous’: I don’t understand these sentences.
Perhaps our global remarks might be useful in this regard. * what shape is the output from each of the pink blocks? Is it (number of atoms) * 3? Correct. * In some works on molecule generation or conformation prediction, the regression target is the noise that was added to the atom positions, and in other works the regression target is the ‘clean’ atom positions from the training data, so this needs to be clarified. The difference between the two is merely semantics, because the difference between truth and noised coordinates is the amount of noise that is added. * what is ‘conformer inconsistency at the graph level’? Some of the conformer structures provided by the authors of the GEOM-drug data set do not match the molecule structure that they are intended to represent. By “graph level”, we mean connectivity (rather than coordinates). * if you want to experiment with different samplers after training then I think that during training you should sample values of σ from a continuous range, rather than picking from N discrete values This option did occur to us. For reasons of time, we never did try this technique, focusing instead on the interesting problem of generation, since this provided the most significant improvement in performance. * If you sample by stepping through values of σ not seen during training, you will be presenting the score model with inputs it never saw during training, and I expect the performance will be suboptimal. We directly investigated the continuity of the output of each bonded component and can confirm that training consistently produced a smooth function of σ. * The guided generation task in section 7 seems arbitrary. It is. This methodology is not fully developed, which is why we describe it as a proof-of-concept. * I do not understand equation (12) because in the right-hand-side numerator we have a vector y and then subtract F(y) which appears to be a scalar. 
It is hard to see, but F is rendered in bold and is intended to represent a vector. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to respond to the reviews and in particular for clarifying the major target of the paper, which I had not properly understood. I feel that the aim of the paper should be more clearly stated, and you need to do more to persuade the reader that the problem you address is a practically important one that is not adequately solved by existing tools.
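The rebuttal's point that predicting the added noise and predicting the clean coordinates are interchangeable regression targets is a generic property of the noising step, and can be shown in a few lines (hypothetical coordinates, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

x_clean = rng.normal(size=(5, 3))   # "truth" atom coordinates
sigma = 0.7
eps = rng.normal(size=(5, 3))       # unit Gaussian noise
x_noised = x_clean + sigma * eps    # input presented to the score model

# A model that outputs the clean coordinates determines the noise, and
# vice versa: the two targets carry exactly the same information.
eps_recovered = (x_noised - x_clean) / sigma
x_recovered = x_noised - sigma * eps
```

Either parameterization can be converted to the other given the noised input and the known σ, which is why the choice is "merely semantics" up to a σ-dependent loss weighting.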
Summary: A diffusion-based conformation generation method called PIDM is proposed for molecules. It takes several geometric features of the noisy conformer as input, including bond lengths, bend angles, proper torsions, chirality, and cis-trans configuration, and outputs scores used to iteratively generate conformations. Experiments show the robustness of the model but are not convincing enough to demonstrate the performance of PIDM. Strengths: - The proposed model is simple yet effective, with a smaller NFE than GeoDiff. - Many versions of the proposed PIDM have been tested in the experiments. Weaknesses: Several problems exist, so I think the article is not mature enough to be published. 1. Confusing presentation and limited novelty. - First, it is claimed to be a physics-informed method, but I cannot find support that physics knowledge actually works as prior knowledge in the model. The five geometric features are ones that conformation generation methods commonly focus on, such as the torsion angles in TorsionalDiff [1] and distances in CGCF [2]; this is not a novel, physics-based idea. In the guided generation, some physics-based or energy-like terms are used to modify the generation process, but this is not the main contribution of the paper, only a simple experimental attempt. - Second, the structure of the article is not well organized. The description of the proposed method is very short and not completely explained; more details with formulas should be added. The embedding dimensions and other experimental details should be included in the experiments part, and the description of the datasets and preprocessing should also be placed in the experiments section. In Sec. 5, most of the details have already been given in [3], and the generation process is just an implementation of it, so it could be presented briefly, since it is not among the main contributions.
However, it takes up an entire page, almost as much as the designed method, which is your main contribution. - Third, several parts are not well explained. For example, in Figure 2, what do the red arrows mean, and what do the black ones mean? Figure 3 shows the loss during training on the training and validation sets, but what are you trying to tell the readers? If you want to show the robustness of the model, evidence that other models are not robust should be provided for comparison, since most training/validation loss plots look much like Figure 3. There are several similar examples. 2. Symmetries are not considered. For diffusion models, most methods consider the SO(3) or E(3) equivariance of the denoisers, leading the probability model to satisfy $p(x) = p(\Pi x)$, in which $\Pi$ is a matrix representation of the group in question. However, in PIDM, the output is scores for the 3D positions of each atom, while the input consists of invariant geometric features like bond lengths and angles; so if the noisy positions are rotated, the geometric features are unchanged and the scores are not rotated. Equivariance is not ensured. (I am not very certain about this, because the model is not fully described in Sec. 2, and my understanding of the input and output of the model is mostly based on guesses.) 3. Incomplete experiments. - The experimental protocols and compared baselines are not convincing. As the new SOTA method, TorsionalDiff shows better experimental performance than GeoMol. Both of these methods generate a conformer starting from a prior conformer produced by RDKit. Therefore, at least TorsionalDiff should be included as a baseline, along with the standard experimental protocols and metrics such as Recall Coverage/AMR and Precision Coverage/AMR. GeoDiff is only trained on a very small part of Drugs, yet its checkpoint is directly employed for comparison. Is this reasonable? Or should it be retrained on the same training set and tested on the validation set?
Should GeoMol, as another deep-learning-based model, be retrained and tested with the same protocols to show the superiority of PIDM? - The comparison of geometries between generated and ground-truth molecules is interesting, but more detailed comparisons should be added: for example, the distributions of C-C and C=C bond distances (and of bend angles, torsion angles, etc.) for the baselines, PIDM, and the ground truth, along with the JS-divergence between the distributions. The MAD alone cannot fully demonstrate superior performance. - The abstract says that the model is resistant to overfitting and explainable. Which part of the experiments demonstrates these two advantages? Can PIDM trained on Drugs generalize well to QMugs? If so, given that the two datasets consist of molecules of similar sizes (25 ~ 30 atoms), how about smaller molecules in QM9 or larger ones such as the ligands in PDBBind? How does the model show explainability? - The sampling method used comes from [3]; what performance gain does it bring compared to the original score-matching sampling methods? If there is no empirical improvement, why is it employed? As far as I know, TorsionalDiff also uses a very small NFE during sampling, so in evaluating effectiveness, does PIDM work better? [1] Bowen Jing, et al. Torsional Diffusion for Molecular Conformer Generation, https://arxiv.org/pdf/2206.01729.pdf [2] Chence Shi, et al. Learning Gradient Fields for Molecular Conformation Generation, https://arxiv.org/pdf/2105.03902 [3] Tero Karras, et al. Elucidating the Design Space of Diffusion-Based Generative Models, https://arxiv.org/pdf/2206.00364.pdf Technical Quality: 3 good Clarity: 1 poor Questions for Authors: The questions are given in the weaknesses part. Please refer to the points that I am doubtful about. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: N. A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our submission and provide a thoughtful response. Given the extent of your comments, and the lack of explicit questions, we will attempt to address the least subjective of the weaknesses you have provided. * The five geometries are very commonly used that conformation generation methods focus on, such as torsion angle in TorsionalDiff[1] and distance in CGCF[2]. Given that the task of generating conformers is to model physical objects, it is natural to borrow elements from classical force fields. The TorsionalDiff model makes no attempt to model the static conformer terms, relying instead on RDKit, a process that requires no physical insight. Concerning CGCF, to label a bond distance as a physical trait requires little insight, since, after all, it is merely a simple parameter of geometry (the distance between atoms). * the generation process is just an implementation of it, so it can be presented in a brief way, since it is not the main contributions. We disagree. It is remarkable that the training portion of a diffusion model can be separated entirely from generation (i.e., a different number of steps and noising schedules). Presenting multiple options for generation is our way of demonstrating this. * Figure 3 is the loss in the training procedure on training a validation set, but what are you try to tell the readers? We are telling the readers that: (1) during training, losses are uniformly decreasing before reaching a plateau, which suggests proper optimization; and (2) the loss from the validation set matches that from training. * Symmetries are not considered Symmetry is implied. The inputs (bond distances, bends, and torsion angles) are all invariant under translation and rotation. The output of each bonded component is a displacement along the vector connecting two atoms, which is invariant under translation and equivariant under rotation (it rotates with the molecule).
The loss function is calculated from the distance between model and truth, and is also invariant under translation and rotation. The symmetry analysis above is obvious using simple geometric principles. We felt no need to emphasize this point. * so if the noisy positions are rotated, the geometries will be unchanged Your concern is misplaced. All Gaussian smearing is uniform (spherical), and so rotations in this respect are irrelevant. * As the new SOTA method, TorsionalDiff shows better experimental performance compared to GeoMol This model is complementary to our model and not comparable. See our global comments for an explanation. * the experimental protocols such as metrics like Recall Coverage/AMR and Precision Coverage/AMR Please see our global remarks concerning the merits of the RMSD metric against a synthetic data set like QMugs. In short, we do not believe that such a metric represents an appropriate protocol for our work. * The GeoDiff is only trained on a very small part of Drugs GeoDiff refers to the CGCF paper for the data set. For that reference, we quote: > “We randomly draw 40,000 molecules and select the 5 most likely conformations for each molecule” As such, GeoDiff has made the decision to train on a randomly selected 13% of the molecules in the data set and a fraction of the conformers. No explanation is given for why only a portion of the data set is used. Certainly there is no technical barrier. We can assume there was no concern about bias or issue with some sub-population of the data set, since the selection was random. Given no other information, the only conclusion one could draw is that the authors believed that this portion of the data set was adequate for the sake of coverage. We assume the authors stand behind the accuracy of their model so constructed. Note that we make no such unjustified assumptions and simply run our experiments on the entire data set. 
* For GeoMol as another deep-learning-based models, should it be re-trained and tested with the same protocols We don’t understand this remark. Certainly the authors of GeoMol are the best judge of the manner in which to train it. Note that these authors, like us, chose to experiment with the entirety of the GEOM-drugs data set. * For example, the distribution of C-C, C=C bond distance of baselines Perhaps you are not familiar with organic chemistry, since the lengths of such bonds are dependent on the chemical groups in which they are found. Some examples of alkane bonds (in Angstroms, predicted by GFN2-xTB) are:
```
1.52167 propane
1.52377 isobutane
1.49187 propylene
1.49526 acetaldehyde
1.49882 isobutylene
```
The MAD of our model indicates a resolution of 0.004 Å, so we would be doing our model a serious disservice by employing such a simplistic measure. * or other bend angles, torsion angles, etc., In these cases, the chemical group has an even stronger influence. Also, do you have a suggestion on how to deal with bend angles in rings? * and the JS-diversity between the distributions We are puzzled why one would compare distributions when access to individual values is available. * Can PIDM trained on Drugs generalize well on QMugs? See Table 2. In case it was not clear, there is only one test data set (extracted from QMugs) used for this table for all models. * If it can, the two datasets consist of molecules of similar sizes (25 ~ 30 atoms), how about smaller molecules in QM9 or larger ones in ligands in PDBBind? We ignore QM9 (it is an ill-conceived data set). As we noted in our submission, there is little point in comparing generated conformers to PDBBind, since structures in the protein data bank, in general, use force field constraints in their experimental solutions and will be biased. During development, we tested quite large structures one-by-one and inspected the results visually. The model holds up remarkably well.
In the supplemental materials we present a generated structure for micafungin, which has 89 heavy atoms, in addition to containing a large macrocycle.
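The symmetry argument made in this rebuttal (rotation-invariant inputs, displacements directed along interatomic vectors) can be checked numerically. The update rule below is a toy stand-in for a bonded component, not the paper's model; the point is that any rule of this form commutes with rotations by construction:

```python
import numpy as np

def rotation_z(theta):
    # 3x3 rotation matrix about the z axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def bond_update(pos_a, pos_b, f):
    """Displace atom b along the a->b axis by f(distance).

    f sees only the rotation-invariant distance, so the resulting
    displacement field is rotation-equivariant by construction.
    """
    vec = pos_b - pos_a
    dist = np.linalg.norm(vec)
    return pos_b + f(dist) * vec / dist

f = lambda d: 0.1 * (1.5 - d)   # toy correction toward a 1.5 A bond

a = np.array([0.2, -0.1, 0.3])
b = np.array([1.6, 0.7, -0.2])
R = rotation_z(0.9)

updated_then_rotated = R @ bond_update(a, b, f)
rotated_then_updated = bond_update(R @ a, R @ b, f)
```

Rotating the input positions and then applying the update gives the same result as applying the update and then rotating, which is exactly the equivariance property the reviewer asked about.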
Summary: This paper introduces a physics-informed generative framework targeted at the task of molecular conformation generation. The method is motivated by the formulation of classical force fields, and this idea is reflected via bond, bend, and proper torsion terms injected into the model design. The experiments are conducted on the GEOM-Drugs and QMugs datasets for molecule conformation generation. Strengths: 1. The paper is well motivated via the lens of classical force fields, and the method contains a reasonable integration of the idea. 2. The method is easy to follow, with the necessary details presented to help understand the entire pipeline of the generation framework. Weaknesses: 1. The experimental evaluations lack necessary details and discussion, which raises concerns about the credibility of the experimental comparison with the baselines (see Q1 and Q3). 2. Important baselines are missing (see Q2). Minor: Some parts of the presentation of the paper are confusing and may require further checks and polish, e.g., line 246. In Figure 5, is it possible to include some conformations generated by the baselines to offer the readers a clearer qualitative comparison? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In this paper, the authors seem to simply use the open-sourced checkpoints of baselines like GeoMol and GeoDiff to report the testing results. However, it is uncertain whether the entire training protocol is fair compared with the original setups in these baseline papers. More justification should be provided on this point to ensure that the experiments offer a fair comparison. 2. Important baselines, e.g., Torsional Diffusion [1], are missing and should be discussed and compared, since they share a similar/relevant idea of leveraging physical priors on bond/bend/torsional angles. 3. Why do the evaluation protocol and metrics seem so different from previous works like GeoMol and GeoDiff?
Is it possible to conduct experiments following the widely-adopted setup in order to make a convincing comparison? [1] Jing, Bowen, et al. "Torsional diffusion for molecular conformer generation." Advances in Neural Information Processing Systems 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have satisfactorily discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the reviewer for reading our submission and providing valuable feedback. We would like to begin by answering the given questions. (1) Using open-sourced checkpoints The goal of a conformer generator is to predict conformers in a manner that respects the underlying physics of a molecule. That physics is independent of how a model was trained, assuming, of course, that the training set consists of physically valid samples. As long as the training set has sufficient coverage, it is also independent of the molecule structures contained in that training set. What is wonderfully convenient is that both the QMugs and GEOM-drugs data sets use the same physical basis for their contents: the semi-empirical method GFN2-xTB. We are assuming that the authors of both QMugs and GEOM-drugs used this method correctly. Therefore, the physical characteristics of the conformers found in both sets are directly comparable. The remaining question is whether the chemical spaces of the two data sets sufficiently overlap. We will remind the reviewer that both data sets are intended to represent a diverse selection of drug-like molecules. We will also assert that the atom types contained within the QMugs data set are a subset of the atom types found in the GEOM-drugs data set. Therefore, any model trained on (the full or a random subset of) the GEOM-drugs data set should be expected to reproduce the physical characteristics of (at least the vast majority of) the conformers in the QMugs data set. You may have noticed that we applied this principle to our own models. That is, our models trained on the GEOM-drugs data set were compared to the test data set derived from the QMugs data set. You may have also noticed that performance was not significantly affected. In addition, it is our contention that a model that proposes to reproduce conformers for drug-like molecules should be expected to perform correctly on drug-like molecules outside of its training set.
We will admit, though, that what it means to be “drug-like” can be a contentious topic. As you may have observed, we went to some trouble to ensure that the test data set was composed of molecule structures that did not structurally overlap with the GEOM-drugs data set in its entirety. This was to avoid data leakage. It also ensures that any model trained on any subset of the GEOM-drugs data set did not use any molecule in the test data set in its training. (2) Important baselines Please see our global remarks concerning the paper [1]. In short, we do not consider this work a valid baseline. (3) Evaluation protocol Please see our global remarks concerning the merits of the RMSD metric against a synthetic data set like QMugs. In short, we do not believe that such a metric represents an appropriate protocol for our work. --- If you will bear with us, we would like to take some additional space to address the weaknesses you have listed. We apologize for the lack of details and discussion. This, unfortunately, was due to the very tight space requirements. The complete details of the model are provided in the supplemental materials. If you have suggestions on how we can improve our submission, within the space limitations, then we welcome any feedback. In particular, we are interested in whether you believe material was extraneous or duplicated, or whether major sections, such as the one on directed generation, deserve to be sacrificed in order to accommodate more model details. Concerning Figure 5, we made the judgment that providing additional examples from our work was more important than providing an example from one of the baselines. If we had to select a baseline, we would likely select RDKit. Would this be an addition that you believe would improve the presentation? Please note that for the sake of honesty, we did not cherry-pick these examples, but chose them at random. Thus, interesting examples were not guaranteed.
We did provide hand-selected examples of interest in the supplemental materials, along with many other (perhaps too many) randomly selected samples.
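The leakage filter described in the rebuttals (discarding any test molecule with even remote similarity to the training, validation, or full GEOM-drugs sets) can be sketched as follows. In practice the fingerprints would be RDKit circular (Morgan) fingerprints; here they are hypothetical feature sets, and the 0.4 threshold is an illustrative choice, not the paper's value:

```python
def tanimoto(fp_a, fp_b):
    # Tanimoto (Jaccard) similarity between two fingerprint feature sets.
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def filter_test_set(test_fps, reference_fps, threshold=0.4):
    # Keep only test molecules dissimilar (Tanimoto at or below the
    # threshold) to every molecule in the reference collection.
    kept = []
    for name, fp in test_fps:
        if all(tanimoto(fp, ref) <= threshold for ref in reference_fps):
            kept.append(name)
    return kept

# Hypothetical fingerprints standing in for Morgan fingerprints.
reference = [{1, 2, 3, 4}, {10, 11, 12}]
test = [("near_duplicate", {1, 2, 3, 9}),   # Tanimoto 0.6 with reference[0]
        ("novel", {20, 21, 22})]            # dissimilar to everything

kept = filter_test_set(test, reference)
```

A test set filtered this way cannot leak training structures, which is the property the rebuttal relies on when comparing checkpoints trained on different subsets of GEOM-drugs.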
Rebuttal 1: Rebuttal: We think it is important to clarify certain aspects of molecule conformers, pertaining, in particular, to their use in drug discovery. As domain experts on this particular topic, we can speak authoritatively, and encourage the reviewers to reach out to other domain experts if they have further questions. As discussed in our paper, one can consider two types of degrees of freedom when establishing the conformer of a molecule: 1) Static terms. This includes bond lengths, bond angles, and improper torsions. Bonded interactions generally describe geometric configurations that are (approximately) fixed for all valid conformers. For example, within a set of valid conformers for a molecule, you will find that the bond length between two given atoms is the same in all of them. 2) Variable terms. These are the proper torsions. They have the freedom to vary and can be used to distinguish between the individual members of a set of conformers of a given molecule. Given that varying the proper torsions allows you to move between molecule conformers, it is the proper torsions that are typically varied during ligand-protein docking. This is how Autodock, Gold, and Glide all function. Note that adjusting the proper torsions only allows one to convert between conformers. One must start with a valid conformer consisting of valid static terms. The QMugs and GEOM-drugs data sets are based on conformers generated in vacuum. It should be emphasized that this choice is highly unnatural for drug-like molecules because it corresponds, physically, to a dilute gas. Drug-like organic molecules are almost always found either in solution or in solid form. Or, in the case of PDB structures, as bound to a protein. All of these physically realistic environments produce dramatically different energies than vacuum. To put it succinctly, the set of proper torsion angles found in these data sets has practically no physical significance.
Our solution to this data set dilemma is to not be overly concerned about which of the favored values of proper torsion angle are selected during generation. This could be criticized as lazy. However, we will point out that for many applications, such as ligand-protein docking, the choice of proper torsion angle is irrelevant, because these angles are subsequently manipulated by the docking algorithm. In addition, it is a minor algorithmic task (with the exception of macrocycles) to sample torsion angles in any required postprocessing, in order to select that set of torsion angles which are most applicable to the task at hand (for example, the angles that produce the lowest potential energy for a given environment). This brings us to the subject of the RMSD. For the static terms, since the associated atoms are in close proximity, errors in quantities like bond length and angle produce little difference in RMSD. An incorrect bond angle can introduce a large change in a distant atom, because of leverage, but in most molecules, it is possible to compensate by altering a proper torsion angle. As such, the RMSD of the entire molecule is a poor way of measuring accuracy of static terms of a molecule. In contrast, altering the proper torsion angle of a molecule can introduce dramatic differences in RMSD. As such, the RMSD metric, when applied to the molecule as a whole, is primarily a measure of the choice of proper torsion angles. Unless we are interested in reproducing the selection of favored proper torsion angles contained in the test data set, the RMSD metric is, for the most part, useless as a benchmark. We understand that some research efforts in machine learning have relied on the RMSD benchmark against synthetic data sets in their work. Nevertheless, the popularity of a benchmark, in our opinion, is not an appropriate reason to dismiss the drawbacks discussed above. This brings us to the question of selecting a good, physically meaningful benchmark. 
If we are interested in establishing the accuracy of reproducing the static terms of molecule conformers, then one choice is to directly compare those terms between generated conformers and an independent test data set. This was our choice. Let’s discuss the paper [1] mentioned by some of the reviewers. We contend that this work is not a suitable baseline for our work. The reason is simple: the goal of this model is opposite and complementary to ours. Here is a quote taken directly from the paper: > “We instead propose torsional diffusion, in which the diffusion process over conformers acts only on the torsion angles and leaves the other degrees of freedom fixed. This is possible and effective because the flexibility of a molecule, and thus the difficulty of conformer generation, lies largely in torsional degrees of freedom [Axelrod and Gómez-Bombarelli, 2022]; in particular, bond lengths and angles can already be determined quickly and accurately by standard cheminformatics methods.” Thus, the authors have constructed a model with the sole purpose of sampling from the set of favored proper torsion angles, a task that we have deliberately chosen not to address. They rely entirely on RDKit for the challenging task of establishing the static terms of conformers. Given that our choice of benchmark scheme is independent of the selection from the set of favored proper torsion angles, including [1] directly serves no purpose, since we would only be measuring RDKit a second time. [1] Jing, Bowen, et al. "Torsional diffusion for molecular conformer generation." Advances in Neural Information Processing Systems 2022.
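As an aside on the geometry discussed above: rotating a fragment about a bond axis (a proper torsion) moves between conformers while leaving static terms untouched. The following is our own minimal illustration (not tied to any docking code; all names are ours), using Rodrigues' rotation formula:

```python
import math

def rotate_about_axis(p, axis, theta):
    """Rodrigues' rotation of point p about a unit axis through the origin."""
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(k_i * p_i for k_i, p_i in zip(axis, p))
    kx, ky, kz = axis
    px, py, pz = p
    cross = (ky * pz - kz * py, kz * px - kx * pz, kx * py - ky * px)
    return tuple(p_i * c + cr_i * s + k_i * dot * (1 - c)
                 for p_i, cr_i, k_i in zip(p, cross, axis))

# Rotating an atom about a bond axis (here the z-axis) changes the torsion
# angle, but its distance to the axis is preserved, so the "static" bond
# geometry of the rotated fragment is unchanged.
atom = (1.0, 0.0, 0.5)
rotated = rotate_about_axis(atom, (0.0, 0.0, 1.0), math.pi / 2)
```

Since rotation about the bond axis preserves every distance to that axis, bond lengths and angles within the rotated fragment are unchanged; only the torsion angle varies, which is exactly the static/variable split described above.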
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Data Pruning via Moving-one-Sample-out
Accept (poster)
Summary: The paper proposes a new method called moving-one-sample-out (MoSo) to remove less informative samples from the training data. The criterion for removing a sample is based on the change in the optimal empirical risk when the sample is removed. As exact calculation of the criterion is computationally challenging, the authors propose an estimator based on gradient information. MoSo shows empirical success compared to baseline methods in data pruning, generalization between networks, and robustness to label noise. Strengths: - The method is based on a simple yet effective idea. - The computational complexity problem of MoSo is effectively tackled with the proposed estimator. - In the experimental section a wide range of comparison methods are used as baselines. - An ablation study is performed to detail the necessity of sub-elements of the method. - The paper is fairly easy to follow. - MoSo is shown to outperform baseline methods in data pruning, generalization between networks, and robustness to label noise in terms of accuracy. Weaknesses: Main points: - The justification for the gradient based estimator of MoSo is solely based on intuition. A more rigorous justification would be desirable. - The initial motivation in the introduction (line 26) states that data pruning ideas can be used to reduce training time. However, within the experimental section, only the accuracy is compared to baseline methods. It would be interesting to see how MoSo compares to baseline methods in terms of training time. Especially because random pruning seems to perform so well and takes essentially no time to score and sample. - It is mentioned that MoSo is aware of training dynamics (line 57). However, it is not entirely clear what is meant by *awareness* and what part of the method is responsible for this. Furthermore, it is not entirely clear how this is different from other methods that use gradients to prune data and why it should be utilized.
- The conclusion of Proposition 1.2 is not discussed in the main text. It would be interesting to see how close the estimator is to the true criterion. This could even be done in an experimental setting comparable to the ablation in Figure 2(b). Minor points: - While Tables 1,2,3 are detailed, the same results are already summarized in Figure 1. Hence, those tables seem redundant if the error bars are included in Figure 1 and the tables could be moved to the supplementary material. - Initially, it is unclear that MoSo requires training a surrogate model to estimate the criterion. This should be highlighted earlier and compared to baseline methods. - Within the experimental section (all tables and Figure 2(b)), it is unclear how many runs the results are averaged over and if the errors are standard deviations or some other measure of uncertainty. - Within the experimental section, it is unclear how the hyperparameters were chosen for the baseline methods. - Algorithm 1, line 1: $\phi$ is not defined/discussed. - The statement "this suggests that time step sampling is a useful technique for improving the efficiency of our method" in line 305 is not supported by the results in Figure 2(b). After the Rebuttal: I have read all other reviews and all rebuttals. Furthermore, I thank the authors for their answers. I keep my original score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: - Q1: In Algorithm 1, line 1: What is $\phi$? - Q2: How many runs were conducted? Are the aggregated statistics averages over multiple runs? What do the error bars represent? Can you clarify? - Q3: What is the implication of Proposition 1.2? How close is the gap of the estimator? - Q4: Regarding the paragraph about robustness to label noise. It is claimed that MoSo "select informative and clean samples" (line 288). However, it is unclear how MoSo is able to distinguish between informative and clean samples as only test accuracies are compared in Table 5.
Is it possible to analyze which fraction of the pruned samples are clean and which are informative? - Q5: Does MoSo suggest an optimal pruning ratio, e.g., by a meaningful cut-off in the distribution of scores? - Q6: Is it possible to analyze whether MoSo actually selects "representative" samples? For example, by comparing how a network trained on the pruned data performs on the full test set. - Q7: In Figure 2(a), there is a method called "random". It is unclear what this method does. I assume random data points get pruned, i.e., the score is uniform among all data points and so is the selection distribution. Can you clarify? - Q8: The exact scenario in which training ImageNet would take 45 years (line 152) is unclear. Can you clarify? Preferential comments: - When listing many references as in the introduction it is easier to read if they are sorted [1,2,3,4] instead of [4,1,3,2]. - In the results tables, bold the best numbers, as well as those with standard errors that overlap with the best method for fair highlighting of the best results. - Inconsistent presentation of tables. While in Tables 1-3 there are vertical lines between methods, results and avg rank, those lines are missing in Tables 4 and 5. - The statement "famous influence function" (line 188) may be inappropriate. Typos/grammar/other: - Potentially missing related work: - Mindermann, S., Brauner, J. M., Razzak, M. T., Sharma, M., Kirsch, A., Xu, W., ... & Gal, Y. (2022, June). Prioritized training on points that are learnable, worth learning, and not yet learnt. In International Conference on Machine Learning (pp. 15630-15649). PMLR. - References [33] and [34] are the same paper. - Inconsistent spelling: "core set", "core-set", "coreset" - Why are there full stops after enumeration marks, i.e., (i). and (ii).? - Whenever I read "our MoSo" I have the feeling a word is missing, like "our MoSo score" or "our MoSo approach" especially if I expand MoSo (what it stands for).
- lines 117-121, 136, 187: I am not sure why *will* appears here. This is not the case later on, for example in lines 211-215. - line 123: we focus on **a** classification task? - line 124: i.i.d. (last dot is missing) - line 126: I would appreciate it if you could specify the spaces of newly introduced variables, i.e., $\delta \in (0,1)$. Same holds for $\eta$. - line 126: Please introduce $\mathbf{w}$. - Equations (2) and (6): Is there an empty line in LaTeX before the equation? That would explain the spacing above the equation. - line 137: Please state once that by $z$ you mean a pair $(x,y)$. - line 157: Why "could" and not "can"? - Equation (4) and more: I think the transpose symbol and $T$ are too close. Especially in the supplement where $T$ also appears as a superscript. I would use another transpose symbol, i.e., `\newcommand*{\transpose}{{\mkern-1.5mu\mathsf{T}}}` (looks like $\newcommand*{\transpose}{{\mkern-1.5mu\mathsf{T}}}$) which follows the (DIN) EN ISO 80000-2:2013 standard - line 187: Why are 1.1 and 1.2 not clickable references? - line 193: There is no Proposition 1. - line 203: Please name $I$ as the number of subsets. - line 252: additional blank space - line 256: remove 'when' - line 257: doesn't -> does not - Sometimes I see $z:(x,y)$ or $\epsilon:0$ in which I read and interpret the colon symbol as an equal sign and I think this notation is confusing for some people. - It would have helped to start the counters (line numbers, equations, etc) in the supplementary material from the counters of the main paper. - Appendix Equation (5): $\mathcal{L}$ is a function, not a set (below the sum symbol). I guess $\mathcal{S}$ is meant here. Also, consider putting brackets around the terms to clarify that both terms are within the sum! As mentioned earlier, I consider the notation $\epsilon:0$ confusing. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are only mentioned in the supplementary material but could be more rigorously discussed in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your appreciation of our work, and we hope our response will address your concerns. --- **Q1: A rigorous justification of MoSo is desirable.** A1: Thanks! Proposition 1.2 gives a rigorous proof of the approximation accuracy, and we have also updated it in the supplementary material to give a tighter bound. Here, we restate the conclusion: $$|M(z) - \hat{M}(z)| \leq O((\ell \eta + 1)gT + \eta g^2T).$$ MoSo assigns high scores to samples that can reduce the empirical risk. Please refer to Line 112 in the supplementary material for details. **Q2: How data pruning ideas can be used to reduce training time.** A2: Thanks! We compare the training efficiency of our MoSo approach with baselines on CIFAR-100 using ResNet50, with 8 Tesla V100 GPUs. MoSo achieves the best trade-off between computational requirements and performance, making it the best-performing model with reasonable computational demands. The overall time cost of MoSo (102.7 min) plus re-training the network on the selected subset (61.7 min) is less than directly training a network on the full set (192.2 min). What's more, our method offers more possibilities for the efficient storage of data and the subsequent efficient training of more models [A].

| Method | Time-cost (surrogate training) | Time-cost (scoring) | Time-cost (total) | Accuracy (Pruning-ratio 60%) |
| ---- | ---- | ---- | ---- | ---- |
| Random | 0 | 0 | 0 | 64.32 |
| GraNd | 192.2 m | 683.8 m | 876.0 m | 60.52 |
| OPT | 192.2 m | $\geq 1$ day | $\geq 1$ day | 58.93 |
| Moderate | 192.2 m | **6.63 m** | 198.8 m | 64.92 |
| MoSo | **46.5 m** | 56.2 m | 102.7 m | **68.97** (within 61.7 min) |

**Q3: What is meant by awareness and what part is responsible for this?** A3: Thanks! The awareness of training dynamics means considering the network information at different stages of the training process. The average across different epochs in Proposition 1.1 is responsible for this.
Please refer to Figure 2(a) in the paper for the ablation of this part. **Q4: The difference from other gradient-based methods.** A4: GraNd is the best-known scheme that also uses gradient information for data pruning. GraNd retains samples with large gradient norms, while our MoSo approach uses the full gradient vector, which retains more potentially useful information than the norm alone and approximates the empirical risk. **Q5-1: The conclusion of Proposition 1.2 is not discussed.** A5-1: Please refer to A1; we will add further discussion about this. **Q5-2: How close is the estimator to the true criterion?** A5-2: Good question! Because of the word limit, please check Q1 in Global-Rebuttal (https://openreview.net/forum?id=vO6ZdPWaHc&noteId=DsrmXiWeKr) for details. **Q6&7: The same results in Tables 1/2/3 are already summarized in Figure 1. It should be highlighted that MoSo requires training a surrogate model to estimate the criterion.** A6&7: Thanks, we accept your suggestions and will make modifications accordingly! **Q8: About the number of runs and the kind of errors.** A8: The results are the average of 5 independent runs. The standard deviation is selected as the measure of uncertainty.
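To make the gradient-based, epoch-averaged idea in A3/A4 concrete, here is a toy, hypothetical sketch of our own (not the authors' implementation; the scalar model, quadratic loss, and all names are our assumptions — the actual estimator is the one in Proposition 1.1/1.2 of the paper). It averages, over randomly sampled epochs, the learning-rate-weighted alignment between a sample's gradient and the gradient of the remaining data:

```python
import random

def train_and_score(data, epochs=50, eta=0.1, sampled_epochs=10, seed=0):
    """Toy MoSo-style scores for 1-D samples under l(z, w) = 0.5 * (w - z)**2."""
    rng = random.Random(seed)
    w = 0.0
    history = []  # (w_t, eta_t) snapshots, one per epoch
    for _ in range(epochs):
        grad = sum(w - z for z in data) / len(data)  # full-batch gradient
        w -= eta * grad
        history.append((w, eta))
    picked = rng.sample(history, sampled_epochs)  # "time step sampling"
    scores = []
    for i, z in enumerate(data):
        s = 0.0
        for w_t, eta_t in picked:
            # gradient of the empirical risk without sample i
            g_full = sum(w_t - x for j, x in enumerate(data) if j != i) / (len(data) - 1)
            g_z = w_t - z                 # gradient of the left-out sample
            s += eta_t * g_full * g_z     # alignment term at this epoch
        scores.append(s / sampled_epochs)
    return scores
```

On toy data such as `[0.1, -0.2, 0.0, 0.15, 10.0]`, the outlier receives the largest-magnitude score, illustrating how an epoch-averaged gradient criterion separates samples by their effect on the empirical risk.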

**Q9: How were the hyperparameters chosen for the baseline methods?** A9: Thanks! Regarding the experimental settings for the baselines and our MoSo approach, we largely followed the framework used in the Moderate paper for fairness of comparison. **Q10: What is the meaning of $\phi$?** A10: $\phi$ means the null set. **Q11: The statement in line 305 is not supported by Figure 2(b).** A11: Thanks for the careful review! We will add the computational cost comparison and delete this statement. **Q12: What is the implication of Proposition 1.2?** A12: Thanks! Please refer to A1. **Q13: Is it possible to analyze which fraction of the pruned samples are clean and which are informative?** A13: Thanks! This is a good question! We present detailed statistics on TinyImageNet with 20% label noise, totaling 100,000 data samples. After pruning 80% of the data with either MoSo or random selection, we observe MoSo reduces the noise ratio of the retained data (from 20% to 14%).

| MoSo | Noisy data | Clean data | Noise Ratio |
| ---- | ---- | ---- | ---- |
| Retained subset | 2795 | 17205 | 14% |
| Discarded (Pruned) | 17205 | 62795 | 22% |

**Q14: Does MoSo suggest an optimal pruning ratio?** A14: Good question! We think the pruning ratio should be user-defined, since it balances efficiency and performance. What MoSo can currently do is provide a good pruning suggestion given the pre-defined pruning ratio. **Q15: Whether MoSo actually selects "representative" samples?** A15: Good question! Here, we compare networks trained on the subset with the top-20% largest MoSo scores and the subset with the 20% smallest MoSo scores. The former is significantly better than the latter, with a 7% higher top-1 accuracy. This confirms that the MoSo score can reflect the importance of samples to some extent.

| Subset | top-1 acc on CIFAR-100 |
| ---- | ---- |
| largest 20% | 54.38 |
| smallest 20% | 47.99 |

**Q16: In Figure 2(a), what is the method called "random"?** A16: Thanks!
Random (selection) means a completely random selection from a data set. It is a strong baseline, as many methods cannot beat it. **Q17: The exact scenario in which training ImageNet would take 45 years (line 152) is unclear.** A17: Thanks! Considering the fastest case, before scoring a sample, we choose a small surrogate network with unsupervised initialization and only finetune the last layer. Such a process takes $>23$ minutes. So, the overall time cost for scoring all (1 million) samples is at least about $45$ years. [A] Cody Coleman, et al.: Selection via Proxy: Efficient Data Selection for Deep Learning. ICLR-2020. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. They definitely increased the clarity for me. Minor remark: > A10: $\phi$ means the null set. I think $\phi$ (phi) is misleading and there are better symbols for that, e.g., `\emptyset` yields $\emptyset$. --- Reply to Comment 1.1.1: Title: Sincere thanks for the response from Reviewer DWe5 Comment: We are profoundly grateful for your exceptionally detailed and thoughtful feedback. Your comments are truly invaluable for strengthening the quality of the manuscript. We sincerely appreciate your selfless contributions to the academic community. The professionalism and rigor you demonstrate as a reviewer command our deepest respect. We promise to thoroughly update the paper according to your suggestions.
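For completeness, the arithmetic behind the roughly 45-year figure in A17 can be checked in a couple of lines (assuming, as stated there, about 23 minutes of surrogate fine-tuning per sample over 1 million samples):

```python
minutes_per_sample = 23       # per-sample surrogate fine-tuning, as stated in A17
n_samples = 1_000_000         # ImageNet-1K scale
years = minutes_per_sample * n_samples / (60 * 24 * 365)
print(f"{years:.1f} years")   # prints 43.8 years, i.e. roughly the quoted 45
```

So the quoted figure is a back-of-envelope upper-bound estimate of the naive per-sample scoring cost.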
Summary: This paper proposes a framework for data pruning that retains important samples while considering the training dynamics. While the overall methodology relies on analyzing the change in empirical risk from removing individual points, the paper introduces a first-order approximation algorithm that can be efficiently computed. Numerical results demonstrate the effectiveness of the method. Strengths: - The motivating idea is intuitive. - Numerical results on CIFAR100 and TinyImagenet suggest the method is effective. Weaknesses: - The proof for Proposition 1.1 seems to be incorrect. Specifically, $L(S/z, w)$ is defined as $L(S,w) - l(z, w) / N$; however, this ignores that the empirical risk when we remove a point also needs to be re-normalized to $1/(N-1)$. I believe this breaks the proof steps from page 4 onwards. I suspect that this mis-specification carries on to the remaining theoretical analysis as well. - The mathematical presentation is also generally unclear. - For example, in the Proof to Proposition 1.1., $L^t$ is used without definition. - In Proposition 1.2, the loss function is assumed Lipschitz in parameters for fixed data, but what the loss function is Lipschitz in is not clear. - Minor comment: there are also general presentation issues and errors in the main paper, although the authors have caught revisions in the Appendix. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please see comments about Proposition 1.1 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The paper includes a discussion on limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer XiFz: Thank you also for your constructive suggestions. We will carefully address each of your concerns and revise the manuscript accordingly. If you have any new feedback, please do not hesitate to let us know! We will do our best to answer your feedback! --- **Q1: The proof for Proposition 1.1 seems to be incorrect. Specifically, $L(S/z, w)$ is defined as L(S, w) - l(z, w)/N; however, this ignores that the empirical risk when we remove a point also needs to be re-normalized to 1/(N-1).** A1: Thanks! Let us address your concern. In fact, the approximation of this coefficient is not wrong but is common and necessary. We can explain this in three ways. 1. Since N is very large in practice (e.g., 1M for ImageNet-1K), the difference between $\frac{1}{N}$ and $\frac{1}{N-1}$ is negligible. 2. Such an approximation is widely used by many previous works, most notably the ICML-2017 Best Paper [A] and its countless follow-ups. Please check Section 2.1 of the paper [A]. 3. Following your suggestion, we also present an approximator based on the exact coefficient $\frac{1}{N-1}$, that is, $$M(z) \approx E_{t}\Big( \frac{\eta_t (2N-3)}{(N-1)^2} ||G^t||^2 - \frac{\eta_t}{(N-1)^2} ||g^t||^2 + \frac{\eta_t(2N-4)}{(N-1)^2} (G^t)^\mathrm{T}g^t\Big),$$ where $G^t = \nabla L(S, w^{t}_S)$, $g^t = \nabla l(z, w^t_S)$, $\eta_t$ is the learning rate, and $N$ is the number of all training data. We applied the new estimator to data pruning experiments on CIFAR-100, comparing it to the original MoSo estimator. The new estimator behaves similarly to the original one.
| Estimator | Pruning-ratio 20% | Pruning-ratio 40% | Pruning-ratio 60% | Pruning-ratio 80% |
| ---- | ---- | ---- | ---- | ---- |
| Original estimator in the paper | 75.76 | **74.29** | **68.97** | 54.38 |
| New estimator derived here | **75.81** | 73.95 | 68.48 | **54.45** |

**Q2-1: The unclear mathematical presentation: in the Proof to Proposition 1.1., $L^t$ is used without definition.** A2-1: Thank you for the feedback. The loss functions $L(\cdot)$ and $l(\cdot)$ are defined in Line 158 of the main paper. The superscript $t$ refers to the $t$-th training epoch, as noted in Line 94 of the supplementary material. $L^t$ represents the loss value at epoch $t$. We will clarify these definitions in the revised paper. **Q2-2: The unclear mathematical presentation: In Proposition 1.2, the loss function is assumed Lipschitz in parameters for fixed data, but what the loss function is Lipschitz in is not clear.** A2-2: Thank you for pointing this out. The proof relies only on the Lipschitz-continuity assumption for parameters, not the specific loss function. This suggests MoSo could apply more broadly, like data reduction for multimodal pretraining. In the paper, we state the loss function used is cross-entropy (Lines 125/133/138 in the main paper). The proof's generality indicates MoSo's potential beyond classification tasks. We will clarify that the proof holds for any continuously differentiable loss function satisfying Lipschitz continuity. **Q3: Minor comment: there are also general presentation issues and errors in the main paper, although the authors have caught revisions in the Appendix.** A3: Thanks. We will further check and polish the presentation in the revised paper. [A] Koh P W, Liang P. Understanding black-box predictions via influence functions. ICML-2017. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response.
My concern with the $1/N \approx 1/(N-1)$ approximation is that in the original influence functions paper [A], this is not used for a rigorous proof (cf. Proposition 1.1). However, in this draft, $M(z)$ given in Definition 1 is different from $M(z)$ redefined in lines 94-95 of the Supplement. I agree that this difference may be small in practice and I commend the authors for deriving a corrected version of the approximation and for running some experiments showing that the corrected version does not yield a major difference from the initial. I encourage the authors to revise the paper, for example by beginning with the rigorous version and then demonstrating approximations or by forgoing the theoretical statements and presenting the approximator via mathematical steps. --- Reply to Comment 1.1.1: Title: Thanks for the comments Comment: We sincerely appreciate your time and efforts in reviewing our work, which helps improve our paper. Regarding your concerns, we would like to make the following further clarifications. We hope our response addresses your concerns and look forward to discussing them with you. --- **Q1. The coefficient approximation is used in the original influence functions paper [A], however, it is not used for rigorous proof.** The original influential work uses this approximation in deriving influence functions but does not provide a theoretical analysis of its tightness. However, subsequent papers have rigorously employed it in formal proofs: Paper [B] provides error bounds on influence estimate accuracy leveraging this approximation. Paper [C] theoretically shows influence functions can identify samples to relabel for lower test risk, relying on this simplified coefficient. Work [D] formally relates worst-case risk change rates to single sample loss perturbation using this approximation. And OPT [E] derives generalization error bounds for influence-based data pruning, founded on this established approximation.
In summary, while tightness was not analyzed initially, many papers since have formally adopted this approximation within rigorous mathematical proofs and bounds. This demonstrates its acceptance in theoretical analyses, beyond just the initial empirical derivation. Our work aligns with this trend of employing the approximation in formal contexts, rather than solely heuristically. --- **Q2. I encourage the authors to revise the paper, for example by beginning with the rigorous version and then demonstrating approximations or by forgoing the theoretical statements and presenting the approximator via mathematical steps.** We appreciate your thoughtful suggestions! As a compromise, we will revise the paper to include the coefficient approximation in the tightness proof for Proposition 1.2, while still ensuring overall simplicity. This yields the following new tightness bound: $$|\mathcal{M}(z) - \hat{\mathcal{M}}(z)| \leq O\Big( (\ell\eta + 1) g T + \eta g^2 T \Big(1 + \frac{3}{N}\Big) \Big).$$ Compared to the original Proposition 1.2 bound, this just introduces one additional negligible term $\frac{3}{N} \eta g^2 T$, since $N$ is typically very large. By incorporating the approximation only in the key tightness analysis, we aim to strike a balance between mathematical rigor and manuscript clarity/conciseness. The impact on the final bound is marginal, yet it formally addresses the coefficient concern. Please let us know if you feel this targeted revision satisfactorily resolves the approximation issue while maintaining readability. We appreciate you working with us to improve the paper while preserving its accessible style. --- [A] Koh P W, Liang P. Understanding black-box predictions via influence functions. ICML-2017. [B] Zhifeng Kong, Kamalika Chaudhuri. Understanding Instance-based Interpretability of Variational Auto-Encoders. NeurIPS-2021. [C] Shuming Kong, Yanyan Shen, Linpeng Huang. Resolving Training Biases via Influence-based Data Relabeling.
ICLR-2022 [D] Zifeng Wang, et al. Less Is Better: Unweighted Data Subsampling via Influence Function. AAAI-2022 [E] Shuo Yang, et al. Dataset Pruning: Reducing Training Data by Examining Generalization Influence. ICLR-2023 --- Reply to Comment 1.1.2: Title: Looking forward to your further reply Comment: Dear Reviewer XiFz: We sincerely thank you for your efforts in reviewing our paper and your suggestions for polishing the manuscript. As we are approaching the end of the discussion period, we would like to ask whether there are any remaining concerns regarding our paper or our response. We are happy to answer any further questions. Best regards, Submission909 Authors
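To make the corrected $\frac{1}{N-1}$ estimator discussed in this thread concrete, here is a minimal sketch of our own that simply evaluates the stated formula (the snapshot format and function name are our assumptions; the empirical expectation over epochs is taken as a plain average):

```python
def corrected_moso_estimate(snapshots, N):
    """Evaluate the corrected MoSo estimator with exact 1/(N-1) normalization.

    snapshots: list of (eta_t, G_t, g_t), where G_t is the full-set gradient
    and g_t the left-out sample's gradient at epoch t, as plain float vectors.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    total = 0.0
    for eta, G, g in snapshots:
        total += (eta * (2 * N - 3) / (N - 1) ** 2 * dot(G, G)
                  - eta / (N - 1) ** 2 * dot(g, g)
                  + eta * (2 * N - 4) / (N - 1) ** 2 * dot(G, g))
    return total / len(snapshots)  # empirical expectation over sampled epochs
```

For example, with a single snapshot `(0.1, [1.0, 0.0], [0.0, 1.0])` and `N = 5`, the three terms are $0.1 \cdot \frac{7}{16}$, $-0.1 \cdot \frac{1}{16}$, and $0$, giving $0.0375$.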
Summary: This paper presents MoSo, a method to identify and remove the least informative samples from a large dataset. The underlying idea is to consider the impact of each sample on the optimal empirical risk. Quantifying this exactly requires leave-one-out retraining for every point, which is intractable. So, the authors provide an approximation of this score that is more efficient in that it does not require leave-one-out retraining for every point. They provide bounds on the quality of this approximation and present empirical comparisons of the data pruning approach with contemporary baselines. Strengths: * The proposed approximation of the MoSo score and the accompanying analysis are sound and novel to the best of my knowledge. * The authors present theoretical results establishing the accuracy of their approximation. * Empirical evaluations on benchmark vision tasks are provided that support the effectiveness of the method relative to state-of-the-art baselines. Weaknesses: * The authors state that data pruning can address the computational challenges in the introduction. However, it is not clear to me how the method can provide a computational speedup given that it needs to train a surrogate model on the entirety of the dataset (Line 8 of Algorithm 1) to compute the MoSo score approximation. * The claimed asymptotic computation times are difficult to understand. Upon looking at the algorithm, it seems that the power of the approximation in Eq. (4) comes from the fact that the surrogate network needs to only be trained once, rather than once per each point (as required by Eq. 3). This should be clarified. Please see the Questions section for more details. * The claim that randomly sampling a few time steps rather than considering all $T$ steps “reduces the overall complexity to be less than $\mathcal O(Tn)$” does not seem sound since the expectation of a uniform sample from $\{1, \ldots, T\}$ is $(T+1)/2$.
* Only the ResNet50 architecture is used exclusively throughout the experiments to construct the pruned datasets. Diversity of architecture in the evaluations would have strengthened the method’s appeal. * The computational complexity of the method is quite high as it requires training the model on the entire dataset relative to the compared methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It is not clear to me why the original MoSo score “takes $\mathcal O(Tn^2)$ time.” More generally, the asymptotic complexities are confusing throughout because neither the batch size of SGD, nor the dimensionality of the data points $d$ are accounted for in the asymptotic analysis. Are we assuming that the batch size = $n$, i.e., regular GD and dimensionality = 1 for the samples? 2. Related to the above, I don’t understand how the approximation in Eq. (4) is faster in an asymptotic sense than the original MoSo score in Eq. (3). Assuming that batch size = $n$ and ignoring the dimensionality of the points as the authors do, two full rounds of training to obtain $w_{\mathcal S}^*$ and $w_{\mathcal S \setminus z}^*$ take $\mathcal O(n T)$ time overall (based on the way the authors express training time). Once we have those two models, computing the difference of losses in Eq. (3) takes $\mathcal O(n)$ time. Where is the quadratic in $n$ coming from? It would help to clarify that Eq. (4) requires only training the model once (as in Line 8 of Alg. 1), unlike Eq. (3) which requires computing $w_{\mathcal S \setminus z}^*$ for each $z$. 3. What is the appeal of the data pruning method from a practical efficiency perspective, given that it requires training the surrogate network for $(T+1)/2$ iterations in expectation on the full dataset? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, they are mentioned in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer QDiJ: Thanks for your time and efforts in reviewing our paper. We will address your concerns below. --- **Weaknesses** **Q1: It is not clear to me how the method can provide a computational speedup.** A1: Thanks for your suggestion. We evaluated MoSo and the other baseline methods on a server with 8 Tesla V100 GPUs. We used the CIFAR-100 dataset and the ResNet50 backbone for our experiments. MoSo achieves the best trade-off between computational requirements and performance, making it the best-performing method with reasonable computational demands. Notably, it outperforms the state-of-the-art method, Moderate, while being more efficient. The overall time cost of MoSo (102.7 min) plus re-training the network on the selected subset (61.7 min) is less than directly training a network on the full set (192.2 min). What's more, our method offers more possibilities for the efficient storage of data and the subsequent efficient training of more models. | Method | Time-cost (surrogate training) | Time-cost (scoring)| Time-cost (total)| Accuracy (Pruning-ratio 60%)| | ---- | ---- | ---- | ---- | ---- | | Random | 0 | 0 |0 | 64.32 | | GraNd | 192.2 m | 683.8 m | 876.0 m | 60.52 | | OPT | 192.2 m | $\geq 1$day | $\geq 1$day | 58.93 | | Moderate | 192.2 m | **6.63 m** | 198.8 m| 64.92 | | MoSo | **46.5 m** | 56.2 m | 102.7 m | **68.97** (within 61.7 min) | **Q2: The claimed asymptotic computation times are difficult to understand. It is not clear why the original MoSo score “takes $O(Tn^2)$ time” and how the approximation in Eq. (4) is faster.** A2: We will clarify this below and hope it addresses your concern. MoSo without approximation requires $n$ full training runs of the network, each training a network on $n-1$ data samples for $T$ epochs. Consequently, it has a theoretical complexity of $O(Tn(n-1)) \approx O(Tn^2)$. Our MoSo estimator avoids this costly leave-one-out retraining. 
It only requires training a network on the full dataset ($n$ data samples) for $T$ epochs. Thus, the time complexity is substantially reduced to $O(Tn)$, providing a notable gain in efficiency. **Q3: The claim that randomly sampling a few time steps rather than considering all T steps “reduces the overall complexity to be less than O(Tn)” does not seem sound since the expectation of a uniform sample from {1,...,T} is (T+1)/2.** A3: We would like to clarify that the time complexity primarily concerns the number of iterations needed to compute the required gradients in Equation 4, which is the most time-consuming step. Assuming the surrogate network is trained for T steps (T=50), our MoSo estimator requires estimating the gradients at each iteration, resulting in an O(Tn) time cost. In our implementation, we found that this process can be accelerated by randomly sampling t steps (t=10, t<T) out of the T steps (Line 231) to perform gradient estimation and averaging the results. So the actual complexity is O(tn), which is less than O(Tn) since t<T. Figure 2b in the paper shows the effect of the sampling ratio (t/T), demonstrating that such a sampling operation trades a slight drop in performance for a 5-fold increase in speed. **Q4: Only the ResNet50 architecture is used exclusively throughout the experiments to construct the pruned datasets. Diversity of architecture in the evaluations would have strengthened the method’s appeal.** A4: Thanks! Following your suggestion, we conducted experiments on various architectures and report the results in the following table. Notably, MoSo generalizes well across architectures, even though the data is scored using ResNet-50 as the surrogate network. 
| Network | Dataset | Random (PR: 80%)| MoSo (PR: 80%) | | ---- | ---- | ---- | ---- | | GoogleNet | C-100 | 59.2 | 62.37 | | MobileNetV2 | C-100 | 49.25 | 51.31 | | DenseNet121 | C-100 | 56.92 | 57.49 | | Swin-T | IN-1K | 67.20 | 72.66 | **Q5: The computational complexity of the method is quite high as it requires training the model on the entire dataset relative to the compared methods.** A5: Pre-training a surrogate network before pruning is a commonly used procedure. Almost all the compared methods except random selection require training a surrogate network on the full dataset before pruning. Previous pruning methods need to train the surrogate network until convergence, normally using the same number of epochs as training on the pruned dataset (e.g., 200 epochs on CIFAR-100). Notably, our MoSo does not require full convergence of the surrogate network; for example, we only use 50 epochs on CIFAR-100 instead of the original 200 epochs. In summary, our MoSo achieves better performance with less computational cost. **[Questions]. Why does the original MoSo score “take $O(Tn^2)$ time” and how is the approximation in Eq. (4) faster?** Thank you for the feedback. Please refer to A2. **[Q6] What is the appeal of the data pruning method from a practical efficiency perspective?** Thanks for your comments! First, we think there must be some misunderstanding about “MoSo needs (T+1)/2 training epochs”. Please refer to A3, which should address your question. With the smaller but informative subset selected by MoSo (e.g. only 20% of the data), we can do many things, like reducing storage overhead, decreasing the training cost of subsequent models [A], and even enabling continual learning [B]. [A] Cody Coleman et al.: Selection via Proxy: Efficient Data Selection for Deep Learning. ICLR-2020. [B] Jaehong Yoon et al.: Online Coreset Selection for Rehearsal-based Continual Learning. ICLR-2022. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
I read the other reviews, your responses to them, and the general response. In light of the clarifications and compelling experimental results regarding computational efficiency of the method, I decided to raise my score to a 6. --- Reply to Comment 1.1.1: Title: A Grateful Response to Reviewer QDiJ Comment: We are truly grateful for your encouraging feedback. It means a great deal to know our work resonated positively. We welcome any additional questions you may have during the discussion period and are more than happy to provide clarification. Furthermore, we will continue refining the evaluation section to more clearly convey our contributions. Our aim is to produce the highest quality work that lives up to the standards of yourself and the broader research community. Thank you again for recognizing our efforts - it inspires us to keep improving. We sincerely appreciate you taking the time to provide such thoughtful and constructive feedback.
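To make the complexity discussion in A2/A3 concrete, below is a minimal Python sketch of a gradient-based score computed from a single training run with random time-step sampling. The score form used here (inner product between a sample's gradient and the mean gradient, scaled by the learning rate) is only our illustrative reading of an Eq. (4)-style approximation, not the paper's exact formula; all names, shapes, and values are made up.

```python
import random

# Hypothetical per-step, per-sample gradients recorded during ONE training
# run of the surrogate network: grads[t][i] is the gradient of sample i's
# loss at training step t (toy dimensions for illustration).
T, n, d = 6, 4, 3
random.seed(0)
grads = [[[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
         for _ in range(T)]
lr = 0.1  # learning rate eta_t, held constant here for simplicity


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def moso_scores(grads, lr, num_sampled_steps):
    """For each sample, average (over randomly sampled steps) the inner
    product between its own gradient and the mean gradient of the set.
    Cost: O(t * n) gradient inner products instead of O(T * n) (cf. A3),
    and no leave-one-out retraining at all (cf. A2)."""
    n = len(grads[0])
    steps = random.sample(range(len(grads)), num_sampled_steps)
    scores = [0.0] * n
    for t in steps:
        dim = len(grads[t][0])
        mean_grad = [sum(g[k] for g in grads[t]) / n for k in range(dim)]
        for i in range(n):
            scores[i] += lr * dot(grads[t][i], mean_grad)
    return [s / num_sampled_steps for s in scores]


scores = moso_scores(grads, lr, num_sampled_steps=2)
# Pruning then keeps the highest-scoring samples.
keep = sorted(range(n), key=lambda i: -scores[i])[: n // 2]
```

The key point the rebuttal makes survives in the sketch: the surrogate is trained once, and scoring is a post-hoc pass over recorded gradients.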
Summary: This paper presents moving-one-sample-out (MoSo) for the purpose of coreset selection. This algorithm measures how empirical loss changes when excluding individual points during training. The paper introduces an approximation and other tricks to make this method computationally feasible. Experimentally, MoSo outperforms other methods on standard datasets, and also has nice properties like generalization to other architectures and robustness to label noise. Strengths: - Good presentation: method is clear and contextualized properly through related work and comparison experiments - Good results: MoSo does better on all datasets evaluated than the comparison baselines - Method seems new and is differentiated from related work - Interesting ablations on architecture generalization and label noise Weaknesses: - Lack of analysis on the computational cost of this method. Comparisons use same amount of samples seen during training, but this does not take into account the extra cost of calculating the coreset for some of these methods. This can be addressed with end-to-end training time or a similar metric. Though the authors mention that this is substantially cheaper than methods that require training a full-network for scoring, this information is still important, especially when considering scaling up this method to larger datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses Is there a reason why experiments aren't done on CIFAR-10? This seems to be a standard benchmark for this line of work. Would also be interested in seeing this method evaluated on a larger dataset or other tasks (e.g. large-scale multi-modal learning mentioned in supplementary) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Addressed in weaknesses - lack of analysis on computational cost. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer eYrR: Thank you for appreciating our approach. We will address your comments below. ---- **Q1: Lack of analysis of the computational cost of this method.** A1: This is a good question! We will incorporate this experiment into the paper. We evaluated MoSo and the other baseline methods on a server with 8 Tesla V100 GPUs. We used the CIFAR-100 dataset and the ResNet50 backbone for our experiments. MoSo achieves the best trade-off between computational requirements and performance, making it the best-performing model with reasonable computational demands. Notably, it outperforms the state-of-the-art method, Moderate, while being more efficient. Because of the use of large-scale linear programming in the scoring phase, OPT is significantly more time-consuming than the other methods. | Method | Time-cost (surrogate training) | Time-cost (scoring)| Time-cost (total)| Accuracy (Pruning-ratio 60%)| | ---- | ---- | ---- | ---- | ---- | | Random | 0 | 0 |0 | 64.32 | | GraNd | 192.2 m | 683.8 m | 876.0 m | 60.52 | | OPT | 192.2 m | $\geq 1$day | $\geq 1$day | 58.93 | | Moderate | 192.2 m | **6.63 m** | 198.8 m| 64.92 | | MoSo | **46.5 m** | 56.2 m | 102.7 m | **68.97** | --- **Questions.** **Q2: Is there a reason why experiments aren't done on CIFAR-10? This seems to be a standard benchmark for this line of work.** A2: We primarily followed the experimental setup in the recent method, Moderate [A]. Here, we have conducted experiments on CIFAR-10 and present the results in the Table below. Our method demonstrates comparable performance to existing methods at low pruning ratios and surpasses them at high pruning ratios. It's worth noting that CIFAR-10 is a smaller dataset compared to CIFAR-100, Tiny-ImageNet, and ImageNet, which are evaluated in our paper. Our method exhibits superior performance on larger datasets, effectively showcasing the potency of our model. 
In the case of small-scale datasets like CIFAR-10, there isn't a compelling reason to employ pruning. | Method | 20% | 40% | 60% | 80% | | ---- | ---- | ---- | ---- | ---- | | Random | 93.05 | 92.19 | 89.77 | 85.20 | | Forgetting | 94.52 | 93.33 | 91.41 | 86.12 | | EL2N | **94.59** | 93.77 | 92.24 | 85.23 | | Moderate | 94.05 | **93.81** | **93.10** | 86.05 | | MoSo | 94.20 | 93.60 | 93.05 | **86.26** | **Q3: Would also be interested in seeing this method evaluated on a larger dataset or other tasks (e.g. large-scale multi-modal learning mentioned in supplementary).** A3: We have conducted experiments on the CC3M dataset, which contains 3 million image-text pairs, to train a CLIP [B] model using two backbone architectures: ResNet50 and ViT-B. Following CLIP [B], we evaluate the trained model on zero-shot image classification. The results shown in the table below demonstrate notable improvements over the random selection baseline. Notably, after removing 80% of the data, we observe a 3.2% increase in performance. Furthermore, data selected using the ResNet50 backbone also enhances the performance of the transformer-based architecture, ViT-B, outperforming the random baseline by 2.6%. This showcases the generalization ability of the pruned data. | Method | Training Data | Zero-shot classification on ImageNet | | ---- | ---- | ---- | | CLIP (R50) [B] | Full dataset | 16.7 | | CLIP (R50) | Random selection 20% subset | 5.9 | | MoSo (R50) | MoSo (ours) selection 20% subset using CLIP (R50) | 9.1 (+3.2) | | CLIP (ViT-B) [B] | Full dataset | 16.1 | | CLIP (ViT-B) | Random selection 20% subset | 5.5 | | CLIP (ViT-B) | MoSo (ours) selection 20% subset using CLIP (R50) | 8.1 (+2.6) | [A] Xiaobo Xia et al.: Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning. ICLR-2023. [B] Alec Radford et al.: Learning Transferable Visual Models From Natural Language Supervision. 
ICML-2021 --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response. I have adjusted my score as my concerns have been addressed. --- Reply to Comment 1.1.1: Title: Sincerely thanks for the response! Comment: We sincerely thank the reviewer for their generosity in time and feedback, and we are also incredibly grateful to the reviewer for their willingness to reconsider and increase their score after reviewing our detailed response! We are excited to continue refining the work guided by the reviewer's suggestions!
Rebuttal 1: Rebuttal: **Q1: How close is the estimator to the true criterion?** A1: This is a good question! We will add this to the revised paper! We compare the approximation error of our estimator against that of the well-known influence function; please refer to Figure 1 in the PDF attachment. Our method exhibits better approximation performance, with a Spearman correlation of 0.4504 with the real MoSo value, while the influence function only achieves a Spearman correlation of 0.0352 with the real value. **Q2: Does the pruning change the data sample distribution, and does the method improve/hurt certain classes?** A2: Thanks for the constructive question! We will add this to the revised paper! We visualize the class-wise accuracy before and after applying our MoSo data pruning approach; please refer to Figure 2 in the PDF attachment. We can observe that the correlation between the two is very significant, with a Spearman correlation coefficient of 0.9134 and a p-value of 0.0296. This shows that the performances before and after pruning with MoSo are quite consistent, and no significant improvement or harm to a particular category was observed. We also investigated whether the data in each category is balanced after applying our MoSo data pruning approach; please refer to Figure 2 in the PDF attachment. Ideally, in the most balanced case, the number of data samples in each category is 100. The number of samples per category is quite balanced for our approach, with mean 99.99 and variance 9.006. Pdf: /pdf/915debacce7cf83a896c095f18e5ce2b80f1cc4f.pdf
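The Spearman correlations quoted in this rebuttal (between estimated and true MoSo scores, and between pre- and post-pruning class-wise accuracies) can be computed with a small rank-correlation routine. This is a generic, stdlib-only sketch of the standard definition (Pearson correlation of the ranks, with tie averaging), not the authors' evaluation code; it assumes the inputs are not constant.

```python
def _ranks(xs):
    """Average (1-based) ranks of xs, with ties assigned the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend the group while values are tied.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = _ranks(a), _ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)
```

A value near 1 between estimated and exact scores means the cheap estimator preserves the ordering that pruning actually uses.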
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a data-pruning method. The authors argue that sample importance should not be determined by sample difficulty. Alternatively, they present the MoSo score, which quantifies the changes in empirical risk upon excluding a single data point. An efficient approximation for MoSo is proposed to calculate the score with some theoretical guarantee. Experiments are conducted with common image classification benchmarks. Strengths: [1] The proposed measure of sample importance is intuitive, and the approximation appears novel. [2] The paper is overall well-written, with a clear description of the insights and adequate related work. Weaknesses: [1] The authors propose an approximation for the MoSo score. However, it is unclear from the experiment how expensive the proposed method is compared to the baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: [1] I appreciate the set of experiments with label noise, as it could provide evidence for the authors' insight on difficult data. I would like to see the overlapping ratio between samples with injected noise and pruned data. This overlapping can be more informative than simple test accuracy in demonstrating whether the proposed method can exclude hard but noisy data. [2] The authors mainly report the trade-off of top-1 accuracy and data pruning ratio, which could demonstrate the effectiveness of a method. It would be interesting to see the data pruning ratio across classes to better understand the method. Does the pruning change the data sample distribution, and does the method improve/hurt certain classes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors didn't discuss limitations in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer xfpA: Thank you for your comments which help us improve our work! ---- **Q1: How expensive the proposed method is compared to the baselines?** A1: Thanks for your suggestion. We will incorporate the additional results into the paper. We evaluated MoSo and the other baseline methods on a server with 8 Tesla V100 GPUs. We used the CIFAR-100 dataset and the ResNet50 backbone for our experiments. MoSo achieves the best trade-off between computational requirements and performance, making it the best-performing model with reasonable computational demands. Notably, it outperforms the state-of-the-art method, Moderate, while being more efficient. Because of the use of large-scale linear programming in the scoring phase, OPT is significantly more time-consuming than the other methods. | Method | Time-cost (surrogate training) | Time-cost (scoring)| Time-cost (total)| Accuracy (Pruning-ratio 60%)| | ---- | ---- | ---- | ---- | ---- | | Random | 0 | 0 |0 | 64.32 | | GraNd | 192.2 m | 683.8 m | 876.0 m | 60.52 | | OPT | 192.2 m | $\geq 1$day | $\geq 1$day | 58.93 | | Moderate | 192.2 m | **6.63 m** | 198.8 m| 64.92 | | MoSo | **46.5 m** | 56.2 m | 102.7 m | **68.97** | ---- **Q2: The overlapping ratio between samples with injected noise and pruned data.** A2: Thanks! This is a good question! We present detailed statistics on TinyImageNet with 20% label noise, totaling 100,000 data samples. After pruning 80% of the data with either MoSo or random selection, we observe MoSo meaningfully reduces the noise ratio of the retained data (decreasing from 20% to 14%). 
| MoSo | Noisy data| Clean data | Noise Ratio| | ---- | ---- | ---- | ---- | | Retained subset | 2795 | 17205 | 14% | | Discarded (pruned) | 17205 | 62795 | 22% | **Q3: Does the pruning change the data sample distribution, and does the method improve/hurt certain classes?** A3: This is a very important question, and we have visualized and analyzed the relevant statistical results, which will be included in a future revision of the paper. See Q2 in the Global Rebuttal (https://openreview.net/forum?id=vO6ZdPWaHc&noteId=DsrmXiWeKr) for details. **Q4: The authors didn't discuss limitations in the paper.** A4: Please refer to Sec. 5 of the supplementary material, where we discuss the limitations and future work. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: The additional experiments and visualizations provide further details for understanding the proposed method. Thanks for the authors' response; I am happy to increase my score. --- Reply to Comment 1.1.1: Title: A Grateful Response to Reviewer xfpA Comment: We sincerely thank the reviewer for taking the time to thoroughly review our additional experiments and visualizations. The reviewer's openness to increasing their score after considering our response is greatly encouraging. We are motivated to continue improving the work based on this thoughtful feedback.
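As a quick sanity check, the noise ratios in the A2 table above follow directly from the reported counts (TinyImageNet, 20% label noise, 100,000 samples, 80% pruned):

```python
# Counts taken from the table in A2 above.
retained_noisy, retained_clean = 2795, 17205
discarded_noisy, discarded_clean = 17205, 62795

retained_ratio = retained_noisy / (retained_noisy + retained_clean)
discarded_ratio = discarded_noisy / (discarded_noisy + discarded_clean)
overall_ratio = (retained_noisy + discarded_noisy) / 100_000
```

So pruning with MoSo lowers the noise ratio of the retained subset from the overall 20% to about 14%, while the discarded portion is noisier (about 22%).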
Imitation Learning from Imperfection: Theoretical Justifications and Algorithms
Accept (spotlight)
Summary: This paper considers the problem of offline imitation learning with supplementary data whose optimality is not guaranteed. The paper gives a theoretical analysis of the bound on the performance gap between the expert policy and the learner's policy for behavior cloning (BC) on expert data only and for naively using BC over the union of expert and supplementary data. Based on the analysis, the paper proposes a provably better method than BC, which is ISW-BC. ISW-BC uses importance sampling (with a lower threshold) between state-action pairs to "correct" the learning from non-expert state-action occupancy onto expert state-action occupancy. In several mujoco and Atari environments, ISW-BC works comparably well or better than the SOTA methods, DemoDICE and DWBC. Strengths: **1. Good writing that is easy to follow and clearly conveys the idea.** The paper is a math-heavy one with many pages of theoretical proofs; however, the core result is well-summarized by Tab. 1, the theorems, and Eq. 3-5. Besides, for readers interested in theoretical results with function approximators, the theorems are kindly summarized in Sec. E in the appendix, leaving the long proof details to the next section. The limitations, broader impact and computational resources are all well-discussed. **2. Simple but effective idea.** The idea of ISW-BC proposed by the paper is very simple; it only requires separate training of a discriminator and an actor, which is quite easy to implement. However, this method is theoretically guaranteed to be better than BC and is indeed better than many baselines in multiple environments. **3. Solid theoretical and practical results.** The theoretical results clearly show how bad it can be to treat non-expert data as expert data in behavior cloning (BC), and how the quality (value function) of the non-expert data affects the result. 
Based on this, the authors propose ISW-BC, a method that is theoretically proved to be better and indeed achieves superior performance over multiple baselines (DWBC, DemoDICE) on multiple environments (Atari, mujoco, and even non-RL tasks). Weaknesses: **1. The proposed method, ISW-BC, might be non-robust to stochastic environments.** Consider a simple tabular MDP with five states $s_{begin}, s_1, s_2, s_{success}, s_{fail}$; the agent always begins at $s_{begin}$, and there is only one action for $s_{begin}$, which has 50% probability of leading to $s_1$ and 50% probability of leading to $s_2$; there is only one action for $s_1$ that 100% leads to $s_{success}$, and two actions for $s_2$ that 100% lead to $s_{success}$ and $s_{fail}$ respectively; $s_{success}$ is the success state that terminates the episode with $+1$ reward; $s_{fail}$ terminates the episode with $-1$ reward. Now, consider the scenario where we only have one expert trajectory (1-shot is a common case) that goes from $s_{begin}$ to $s_1$ and finally $s_{success}$. The supplementary data acts uniformly at random. By definition, the discriminator now gives a close-to-zero ratio for $d_h^E/d_h^U$ for any history that involves $s_2$ (let us ignore the numerical stability issue for now because there are engineering solutions). DICE methods have an equivalent of a value function (which is the Lagrange dual function), which can guide the agent back from $s_2$ to $s_{success}$. ISW-BC, however, assigns no or very little weight to supplementary data on $s_2$, and, because the expert has never experienced $s_2$, does not know what to do on $s_2$. **2. 
It strikes me as a little strange how we "justify" the use of weights in imitation with imperfect data from the theoretical analysis**, because the story of the paper seems to be improving over NBCU (later proved to be empirically better than DICE/DWBC), but the NBCU analyzed in the paper is too unintuitive to ever work; it is natural, even without theoretical analysis, for one to know that non-expert data that is arbitrarily bad cannot be treated as expert data. **(Despite this, I am still convinced that the theoretical analysis of NBCU is a notable contribution.)** **3. Other minor problems:** a) Besides the empirical advances in offline IL mentioned in the paper, there are more theoretical advances in offline IL (more specifically, the unification of offline IL and RL), which are MAHALO [1] and ReCOIL [2]. I encourage the authors to briefly discuss them in the related work section. b) The color of the curve for each method should be unified throughout the paper. For example, the colors of ISW-BC in Fig. 9 and Fig. 10 in the appendix are not unified and might mislead the readers. **References:** [1] A. Li et al. MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations. In ICML, 2023. [2] H. S. Sikchi et al. Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods. In ArXiv, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have one question: can we actually comprehend ISW-BC as reward-weighted regression [1, 2] with $\gamma=0$ and reward $r(s,a)=\log\frac{d^E(s,a)}{d^U(s,a)}$? If so, then I think more discussion is needed on the relationship between ISW-BC and RWR [1] / AWR [2]. My suggestions for the authors are listed as follows. I am open to increasing my score if the authors can improve their paper as suggested: 1. Modify the paper as suggested in the weaknesses (discussion of point 1, modifications for points 2, 3) and limitations section; 2. 
It would be great for the authors to try supplementary data consisting of some expert trajectories and many random or medium-level trajectories, like those tested in [3]. The composition of the supplementary data can largely affect the performance of the algorithm [3]; 3. I strongly recommend the authors open-source their code upon acceptance (or next submission). **References:** [1] Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In ICML, 2007. [2] X. Peng et al. Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning. In ArXiv, 2019. [3] H. S. Sikchi et al. Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods. In ArXiv, 2023. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: **Limitations:** The authors have discussed the theoretical limitations in line 280 and lines 746-750. I think, however, there are more limitations that the authors could consider adding, concentrated in a single limitation section: 1) non-robustness to stochasticity (elaborated in point 1 of the weakness section); 2) the assumption that $d^U$ covers $d^E$, which is also a weakness that DICE possesses but still a concern in practical use. Note that these limitations do not necessarily mean that the work is not valuable enough for the conference; however, they do pose concerns for readers who want to apply ISW-BC in the future. **Potential Negative Societal Impact:** The paper does a good job of discussing the broader impact at the beginning of the supplementary material. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
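The five-state counterexample from Weakness 1 can be written down as a small transition table. This is purely an illustration of the reviewer's example; the state/action names and the dictionary layout are our own, not from the paper.

```python
# The reviewer's five-state MDP from Weakness 1, as a transition table.
# transitions[state][action] -> list of (probability, next_state).
transitions = {
    "s_begin": {"a0": [(0.5, "s1"), (0.5, "s2")]},
    "s1":      {"a0": [(1.0, "s_success")]},
    "s2":      {"a_good": [(1.0, "s_success")],
                "a_bad":  [(1.0, "s_fail")]},
}
# Terminal rewards: reaching s_success gives +1, s_fail gives -1.
reward = {"s_success": +1, "s_fail": -1}

# The single expert trajectory never visits s2, which is the crux of the
# argument: an importance-weighted BC learner gets (almost) no signal on
# s2, while a DICE-style dual variable could still rank s_success above
# s_fail there.
expert_trajectory = ["s_begin", "s1", "s_success"]
```

Writing the MDP out this way makes it easy to check, e.g., that the expert data leaves $s_2$ entirely uncovered.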
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper, and for your constructive comments. **Comment 1:** The proposed method, ISW-BC, might be non-robust to stochastic environments. **Response 1:** Thank you for providing the example and initiating the discussion. However, we would like to clarify that, in theory, ISW-BC can work in stochastic environments (we do not impose a deterministic-transition assumption). Moreover, in practice, we have considered non-deterministic tasks and shown that ISW-BC works well. Regarding your concern with the provided example, we clarify that this failure mode is associated with Proposition 2, where the main issue lies in the tabular representations rather than the stochasticity itself. However, we have shown that when the feature design is well-crafted, ISW-BC can effectively avoid this failure mode, as demonstrated in our Theorem 3. Regarding your mention of DICE methods, we would appreciate further clarification on which specific DICE method you are referring to and how it manages to work with just one expert trajectory. If you are referring to the DemoDICE method, we would like to emphasize that it cannot recover the expert policy even at the population level (see Appendix B for more details). **Comment 2:** It strikes me as a little strange how we "justify" the use of weights in imitation with imperfect data from theoretical analysis. **Response 2:** We appreciate your feedback, and we would like to provide further clarification to address your concern. Our theoretical analysis indeed sheds new light on the justification for using weights in imitation with imperfect data. While it is intuitive to believe that treating non-expert data, which can be arbitrarily bad, as expert data would not work in the worst-case scenario, our theory reveals that this naive idea may still perform well in certain cases. 
Our theory introduces the concept of state-action/policy distribution shifts, as discussed in Remark 1, and provides a characterization of when the naive approach fails and when it can be effective. We use experiments to illustrate both the good and bad cases for NBCU. Our paper then introduces the use of importance sampling to improve NBCU in both cases. **Comment 3:** Besides the empirical advances in offline IL mentioned in the paper, there are more theoretical advances in offline IL, which are MAHALO [1] and ReCOIL [2]. **Response 3:** Thanks for pointing out these references. We provide a short discussion below, which will be included in the revised paper. Similar to MILO (cited as [7] in our paper), MAHALO [1] analyzes the suboptimality of an algorithm that follows the pessimism principle in offline RL, while we study BC and its variants. ReCOIL [2] optimizes the state-action distribution matching objective with new duality techniques and presents a new energy-based model viewpoint. In contrast, ISW-BC leverages the importance sampling technique, and we provide a new analysis for it. **Comment 4:** The color of the curve for each method should be unified throughout the paper. **Response 4:** We will ensure that the color format is unified to avoid any confusion in the revised version. **Comment 5:** I have one question: can we actually comprehend ISW-BC as reward-weighted regression [1, 2] with $\gamma=0$ and reward $r(s, a) = \log \frac{d^{\operatorname{E}}(s, a)}{ d^{\operatorname{U}}(s, a)}$? **Response 5**: Yes, the training objective of ISW-BC can be comprehended as reward-weighted regression (RWR) with $\gamma=0$ and reward $r(s, a) = \log \frac{d^{\operatorname{E}}(s, a)}{ d^{\operatorname{U}}(s, a)}$. This viewpoint is intriguing, but we would like to highlight the following differences: - Papers on RWR mainly consider the online setting, while we focus on the offline setting. 
- Papers on RWR are applicable in the RL setting where the reward is readily available, whereas in our imitation learning setting, we need to infer the reward (or the importance sampling ratio). **Comment 6:** Modify the paper as suggested in the weaknesses and limitations section. **Response 6:** Your suggestions are very helpful and we will revise the paper as suggested. **Comment 7:** It would be great for the authors to try supplementary data consisting of some expert trajectories and many random trajectories or medium-level trajectories, like those tested in [3]. **Response 7:** Thanks for pointing out that the supplementary data distribution is important for algorithm performance, which is consistent with our claim in discussing NBCU. We have run experiments where the supplementary data consists of 10 expert trajectories and 10 random trajectories in the locomotion control benchmark. The results show that ISW-BC outperforms BC by a wide margin, demonstrating the robustness of ISW-BC to distribution shift. Furthermore, ISW-BC also performs better than the other baselines, DemoDICE and DWBC. The detailed results are available in Table 1 in a separate PDF file. We are also planning to conduct more experiments with medium-level trajectories or in Atari games and include these results in a future revision. **Comment 8:** I strongly recommend the authors to open-source their code upon acceptance (or next submission). **Response 8:** As promised in the Appendix, we will make our code and datasets available for public access upon acceptance. Currently, in accordance with the NeurIPS instructions, we have provided the code to the area chair via an anonymized link. **Comment 9:** The assumption that $d^{\operatorname{U}}$ covers $d^{\operatorname{E}}$, which is also a weakness that DICE possesses, is still a concern in practical use. 
**Response 9:** We appreciate your feedback, but we would like to clarify that this assumption directly holds because the union data includes the expert data. --- We hope that the above response can address your concerns adequately. We would greatly appreciate it if you could re-evaluate our paper based on the above responses. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for your detailed response. Generally, I think most of my questions are well-addressed. However, I have two follow-up comments on the response: 1. Regarding response 1: while I agree that DICE methods such as DemoDICE cannot retrieve the policy with minimal occupancy divergence to the expert policy, my argument is that at the policy retrieval step, DemoDICE uses the exponential of the (scaled) advantage, which will give more weight to the action leading to $s_{success}$ as long as the value function (dual variable) learned for $s_{success}$ is higher than that for $s_{fail}$. Thus, though DemoDICE does not accurately retrieve the expert policy, with higher probability it will make the right choice on $s_2$. 2. Regarding response 5: There are works that are similar to RWR/AWR but consider the offline scenario, e.g., MARWIL [1]. The use of advantage/return-based weighted regression is common in the RL/IL community [2, 3]. **References:** [1] Q. Wang et al. Exponentially Weighted Imitation Learning for Batched Historical Data. In NeurIPS, 2018. [2] Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., & Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920. [3] Wang, Z., Novikov, A., Zolna, K., Merel, J. S., Springenberg, J. T., Reed, S. E., ... & de Freitas, N. (2020). Critic regularized regression. Advances in Neural Information Processing Systems, 33, 7768-7778. --- Reply to Comment 1.1.1: Title: Thanks for Your Helpful Comments Comment: We appreciate your prompt response! 
**Response 1:** Your insightful discussion is greatly appreciated. We agree with your viewpoint that the DICE method could indeed make a correct decision for $s_2$, if we ignore the bias concern within DemoDICE. We intend to incorporate this valuable discourse into our revised paper. **Response 2:** We extend our gratitude for highlighting these references to us. It is important to note that the methods outlined in the cited works necessitate access to accurate environment rewards, thus rendering them unsuitable for direct application within the context of our paper. Nonetheless, we acknowledge the significance of these works and intend to include a comprehensive discussion of them in our revised paper.
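The reward-weighted-regression view of ISW-BC discussed in Response 5 above ($\gamma=0$ with reward $r(s,a)=\log \frac{d^{\operatorname{E}}(s,a)}{d^{\operatorname{U}}(s,a)}$) amounts to a weighted negative log-likelihood over the union data. The function below is a hypothetical numpy sketch of that objective, not the paper's implementation; in practice the log ratio would be estimated by a learned discriminator.

```python
import numpy as np

def isw_bc_loss(log_pi, log_ratio):
    """Hypothetical sketch of the ISW-BC objective on union data.

    log_pi    : (N,) policy log-likelihoods log pi(a_i | s_i)
    log_ratio : (N,) estimates of log d^E(s_i, a_i) / d^U(s_i, a_i),
                i.e. the RWR reward with gamma = 0 (in practice the
                logit of a learned expert-vs-union discriminator)
    """
    weights = np.exp(log_ratio)        # importance-sampling ratios d^E / d^U
    return -np.mean(weights * log_pi)  # weighted behavioral-cloning loss
```

With `log_ratio` identically zero (no reweighting), this reduces to plain behavioral cloning on the union data, i.e. the NBCU baseline.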
Summary: The paper provides derivations of the imitation gap, the gap in performance between the trained agent and the expert who provided the data, when traditional behavioural cloning (BC) is used. The specific setting assumes there is plentiful supplementary data to train the BC agent on, but since this supplementary data may be (and probably is) of poor quality (i.e., not as good as expert demonstrations), traditional BC produces less than optimal agents. The paper proposes an importance-sampling correction by training a discriminator on the dataset (to distinguish the high-quality and low-quality demonstrations), and then training BC using the importance-sampling correction. The results indicate the proposed method improves over BC and existing methods, while not reducing original BC performance in settings with a high amount of expert data. Strengths: - Theoretical backing and derivation of the method. - Both theoretical and empirical improvements over baselines, and the proposed method is more applicable than baselines (e.g., no natural extension of DemoDICE to the image recognition task, as it lacks rewards). - Empirical results in three different settings (MuJoCo, Atari and object recognition). - Method is simple to implement. With a good baseline code base shared, I could see other people adapting this method and trying it out. Weaknesses: - Proposed method has rather small/noisy improvements in terms of metrics in the experiments. - In the "noisy expert" setting, the proposed method is clearly better than the baselines. However, this setting seems rather unrealistic (proper trajectories from a policy but actions are random). A more realistic scenario would be rollouts from a poorly trained policy, or a random agent. - Not a very novel setting (as evident by the number of baselines) and the solution, while well executed, is a combination of existing works in a somewhat simple way. - Training discriminators may be problematic (which is a shared difficulty with baselines). 
Authors note this in the Appendix for the Atari experiments. - (Minor) No code available, but the paper lists references to libraries and datasets used. Nevertheless, replicating the results as presented in the paper will be near-impossible, given the earlier works in the ML field. I urge authors to share the code, even if it is "messy", so that others can build on the contributions of this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) What is the "BC" method shown in Tables 2, 3 and 4? Is this just the "BC" row trained on the expert dataset only? If so, please clarify in the main paper text. 2) The "one expert trajectory in expert dataset" setup throughout the experiments seems overly restrictive. Have you experimented with other settings, like with 5 or 10 expert trajectories in the expert dataset? Was it ensured that the "one expert trajectory" in the expert dataset was indeed a good demonstration? Even with a good policy, some trajectories may end up being bad. 3) In the object recognition task, the supplementary data is data with valid labels but with a different style. However, the focus is on "sub-optimal policy" data, and in other environments the supplementary data consists of different policies ("full dataset" setting) and outright wrong labels ("noisy expert"), where some actions are replaced with random ones. Following this, it seems like a more natural setup for the object recognition task would be to do everything in a single domain (e.g., "Real"), but randomly replace the sample label with a wrong label in the supplementary dataset. What is the reason behind using different sub-sets as supplementary data? ### Comments Note: I do not have the expertise to comment on the mathematical derivations. I work on the assumption these are correct/thorough, and comment on the experimental setup and results. - Footnotes come after commas - Term "imitation gap" is used early on (e.g., lines 39 and 66), but not formally or informally defined. 
A quick definition after first use of the term would help readability. - Not required, but in future, please use the same colors consistently between figures (e.g., the same method always gets the same color). This improves paper readability quite considerably. - Section 3 header sounds like it is missing something (e.g., should it be "Preliminaries" or "Problem setup"?) - Numbers in the tables are in a slightly different format; some (Random, Expert) seem to be in normal text mode, while the rest are in math mode. Formatting should be consistent. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors have a broader impact section, and correctly report the limited societal impact. Authors list some of the limitations of the method in the Appendix, e.g., difficulties regarding training the discriminator in the Atari domain. ## Rebuttal acknowledgement I have read the authors' rebuttal and new results, which did address my concerns, and I updated my score from 6 to 7 (before the discussion period closed). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and providing us with your valuable feedback. **Comment 1:** In the "noisy expert" setting, the proposed method is clearly better than the baselines. However, this setting seems rather unrealistic (proper trajectories from a policy but actions are random). A more realistic scenario would be rollouts from a poorly trained policy, or a random agent. **Response 1:** We appreciate your concern and would like to provide two responses to address it. On the one hand, we believe that the considered noisy expert setting is practical in some applications with potential data corruption. For example, although the human expert demonstrates an optimal trajectory, the recorder or the recording system may corrupt the data by accident or on purpose. Motivated by such applications, the study of robustness to data corruption has drawn much research interest in imitation learning [R1, R2]. To further address your concern about rollouts from a non-expert policy, we have experimented with the case where the supplementary dataset contains expert trajectories and trajectories collected by a random agent in the locomotion control benchmark. The experimental results show that ISW-BC outperforms NBCU significantly, implying the robustness of ISW-BC to distribution shift. Besides, ISW-BC also performs better than the other baselines, DemoDICE and DWBC. The detailed results are available in Table 1 in a separate PDF file. We are also planning to conduct more experiments in Atari games and include these results in the future version. References: [R1] Liu Liu, et al. “Robust Imitation Learning from Corrupted Demonstrations.” arXiv: 2201.12594 [R2] Fumihiro Sasaki, Ryota Yamashina. “Behavioral Cloning from Noisy Demonstrations." ICLR 2021. **Comment 2:** No code available. **Response 2:** As promised in the Appendix, we will make our code and datasets available for public access upon acceptance. 
Currently, in accordance with the NeurIPS instructions, we have provided the code to the area chair via an anonymized link. **Question 3:** What is the "BC" method shown in Tables 2, 3 and 4? Is this just the "BC" row trained on the expert dataset only? **Response 3:** Yes, you are correct. The "BC" method shown in Tables 2, 3, and 4 refers to the row labeled "BC," which is trained solely on the expert dataset. We apologize for any confusion, and we will make sure to clarify this point in a later revision of the paper. **Question 4:** The "one expert trajectory in expert dataset" setup throughout the experiments seems overly restrictive. Have you experimented with other settings, like with 5 or 10 expert trajectories in the expert dataset? Was it ensured that the "one expert trajectory" in the expert dataset was indeed a good demonstration? Even with a good policy, some trajectories may end up being bad. **Response 4:** We appreciate your concern. Indeed, for the locomotion control tasks, we have conducted experiments with 5 expert trajectories and observed that BC (Behavioral Cloning) trained solely on the expert dataset performs exceptionally well. Consequently, in this case, there appears to be no significant advantage in using supplementary data. Note that one expert trajectory is a common choice in the existing literature. As for your concern that some trajectories may end up being bad, we note that the collected expert trajectories are all of high quality for this benchmark. For the other tasks (e.g., Atari games and object recognition), we do consider the setup of more than a single demonstration. **Question 5:** In the object recognition task, the supplementary data is data with valid labels but with a different style. 
However, the focus is on "sub-optimal policy" data, and in other environments the supplementary data consists of different policies ("full dataset" setting) and outright wrong labels ("noisy expert"), where some actions are replaced with random ones. Following this, it seems like a more natural setup for the object recognition task would be to do everything in a single domain (e.g., "Real"), but randomly replace the sample label with a wrong label in the supplementary dataset. What is the reason behind using different sub-sets as supplementary data? **Response 5:** We utilize distinct subsets as supplementary data, as this approach is a common practice within this benchmark for examining the robustness of the learning method against distribution shifts. To establish a connection between this setup and our formulation, one might consider that each domain corresponds to a unique sub-state space. Nonetheless, in line with your insights, we have conducted a new experiment. In this experiment, each supplementary dataset is situated within the same domain as the expert dataset, and certain labels within the supplementary dataset have been deliberately subjected to noise injection. For comprehensive details, please refer to Table 2 in the separate PDF file. Empirical results demonstrate that ISW-BC outperforms baseline methods and maintains its robustness even within this particular scenario. **Comment 6:** Footnotes come after commas. Term "imitation gap" is used early on (e.g., lines 39 and 66), but not formally or informally defined. Please use the same colors consistently between figures. Section 3 header sounds like it is missing something (e.g., should it be "Preliminaries" or "Problem setup"?) Numbers in the tables are in a slightly different format. **Response 6:** Thank you for bringing these issues to our attention. We will make the corresponding corrections and improvements based on your suggestions. 
--- We hope that the above answers can address your concerns satisfactorily and improve the clarity of our contribution. We would be grateful if you could re-evaluate our work based on the above responses. --- Rebuttal Comment 1.1: Comment: Thank you for your replies and additional experiments. Indeed, the results look promising for the additional experiments you ran. It is reassuring to see that the method works with random agent data as well, which is easy to come by, compared to expert trajectories with random noise labels. Given the replies, I am happy to increase my score from 6 to 7. I encourage authors to take care to open-source the exact code to replicate results; not only is it good research practice to do so, but it will drive up the attention this work will get, as people are able to base their results on your code and use it as a baseline in the future. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your insightful comments and feedback. We will revise the paper as suggested and open-source the code and datasets. We are delighted to learn that our responses have addressed your concerns, and we express our deep appreciation for your reconsideration of the score.
Summary: This paper studies the problem of offline imitation learning (IL) with a supplementary dataset, which can address the scarce expert data issue in pure IL. In this setting, the challenge is that the supplementary dataset may have out-of-distribution samples. This paper considers the classical method Behavioral Cloning (BC) and its variants, and proves their imitation gap bounds in offline IL with a supplementary dataset. The theoretical results show that the naïve BC on union dataset (NBCU) method suffers a non-vanishing gap, and thus may be worse than BC which only learns from the expert dataset. To address this issue, the authors propose the method Importance-sampling weighted BC (ISW-BC), which can select in-distribution samples in supplementary dataset. They prove that ISW-BC can eliminate the gap in NBCU. The experimental results also show that ISW-BC outperforms existing methods on a variety of tasks. Strengths: 1. This paper conducts a systematic theoretical study of offline IL with a supplementary dataset. The developed theory closes the gap between theory and practice and lays a foundation for further studies of this problem. 2. This paper proposes a simple and effective method ISW-BC. The authors validate that ISW-BC can address the distribution shift issue in both theory and practice, which makes advances over existing methods. Weaknesses: 1. This paper is a bit dense to read. I believe that this paper would benefit from providing more intuitions and proof sketch for the theoretical results. Besides, the authors should give more analysis of the experimental results, which can give the reader an intuitive idea about how and where the proposed algorithm improves upon existing methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The theoretical results for ISW-BC in lines 260-270 are quite complicated and difficult to understand. Can the authors give more explanations and proof sketch for these results? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations and broader impacts of this paper in the conclusion part and appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time to review and provide positive feedback for our work. **Comment 1:** This paper is a bit dense to read. I believe that this paper would benefit from providing more intuitions and proof sketch for the theoretical results. Besides, the authors should give more analysis of the experimental results, which can give the reader an intuitive idea about how and where the proposed algorithm improves upon existing methods. **Response 1:** Thanks for your helpful suggestions. We will revise our paper based on your comments. **Question 2:** The theoretical results for ISW-BC in lines 260-270 are quite complicated and difficult to understand. Can the authors give more explanations and proof sketch for these results? **Response 2:** Thanks for your helpful comments. We provide a proof sketch below for your reference and we will put it in the revised paper. First, based on a classical reduction lemma, we can upper bound the imitation gap by the divergence between the expert policy and the learned policy distributions. $$ V\left(\pi^{\mathrm{E}}\right) - V\left(\pi^{\mathrm{ISW}-\mathrm{BC}}\right) \leq H \sum\_{h=1}^H \mathbb{E}\_{s \sim d\_h^{\mathrm{E}}(\cdot)}\left[\mathrm{TV}\left(\pi\_h^{\mathrm{E}}(\cdot \mid s), \pi\_h^{\mathrm{ISW}-\mathrm{BC}}(\cdot \mid s)\right)\right] $$ Then our target is to analyze the properties of the learned policy distribution. To achieve this goal, we derive the closed-form solution for the learned policy, on which the learned weights of samples play a crucial role. $$ \pi\_h^{\mathrm{ISW}-\mathrm{BC}}(a \mid s)=\frac{\widehat{d\_h^{\mathrm{U}}}(s, a) w\_h(s, a) \mathbb{I}\left[w\_h(s, a) \geq \delta\right]}{\sum\_{a \in \mathcal{A}} \widehat{d\_h^{\mathrm{U}}}(s, a) w\_h(s, a) \mathbb{I}\left[w\_h(s, a) \geq \delta\right]} $$ Then we continue to analyze the properties of the learned classifier which induces the weights of samples. 
Through a landscape-based analysis (Lemma 1 and Lemma 2), we prove that the learned classifier induces a margin consistent with the ground-truth classifier ($\Delta\_h (\theta^\star\_h) > 0$) and thus can distinguish between in-expert-distribution samples $\mathcal{D}^{\text{E}}\_h \cup \mathcal{D}^{\text{S}, 1}\_h$ and out-expert-distribution samples $\mathcal{D}^{\text{S}, 2}\_h$. With this result, we further show that the learned policy matches the expert policy on states within the in-expert-distribution samples $\mathcal{D}^{\text{E}}\_h \cup \mathcal{D}^{\text{S}, 1}\_h$. Finally, we can obtain the improved imitation gap bound. Please let us know if you have further concerns about this point.
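The closed-form ISW-BC policy given in the proof sketch above can be made concrete in the tabular case. The following numpy function is an illustrative sketch (the function name, the uniform fallback for states with no retained mass, and the variable names are our own, not from the paper) of computing $\pi^{\mathrm{ISW-BC}}$ from empirical union occupancies $\widehat{d^{\mathrm{U}}}$, weights $w$, and threshold $\delta$:

```python
import numpy as np

def isw_bc_policy(d_U, w, delta):
    """Closed-form ISW-BC policy for a tabular state-action space.

    d_U   : (S, A) empirical union-data occupancy estimates
    w     : (S, A) learned importance weights
    delta : threshold; samples with weight below it are discarded
    """
    scores = d_U * w * (w >= delta)              # numerator of the closed form
    totals = scores.sum(axis=1, keepdims=True)   # per-state normalizer
    # fall back to a uniform policy on states where every weight is clipped
    uniform = np.full_like(scores, 1.0 / scores.shape[1])
    safe = np.where(totals > 0, totals, 1.0)
    return np.where(totals > 0, scores / safe, uniform)
```

On states covered by in-expert-distribution samples, the indicator zeroes out the out-of-distribution actions, which is what drives the matching argument in the sketch.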
Summary: In the paper, the authors focus on imitation learning (IL) when working with supplementary imperfect data. They conduct a thorough theoretical analysis to understand the limitations of IL under various dataset compositions. The authors' theoretical analysis provides insights into the bounds and constraints of IL when dealing with different types of datasets. To address this problem, they propose a novel method called importance-sampling weighted behavior cloning (ISW-BC). The proposed method is designed to mitigate the issues associated with imperfect data in IL. This technique leverages importance sampling to assign appropriate weights to different samples, thereby effectively reducing the impact of imperfections in the training data. To validate the effectiveness of their approach, the authors conduct extensive evaluations on a diverse set of tasks. The results indicate that the proposed method outperforms the current state-of-the-art techniques in most cases. This suggests that the importance-sampling weighted behavior cloning method is a promising solution for tackling the problem of imitation learning with supplementary imperfect data. Strengths: One strength of the paper is the authors' meticulous exploration of the various theoretical bounds that arise when working with imperfect data within the framework of BC. By dissecting these limitations, they provide a deep understanding of the challenges faced in practice, enabling researchers and practitioners to make more informed decisions when applying BC to real-world datasets. Furthermore, the authors introduce a novel method based on importance sampling, which offers a clear and intuitive approach for addressing the imperfections in the data. In addition to their theoretical contributions, the authors demonstrate the practical relevance of their proposed method by conducting a thorough analysis on diverse tasks. 
This empirical evaluation validates the effectiveness of their approach across various application domains, further strengthening the paper's findings. Weaknesses: The authors' analysis lacks consideration of alternative methods that can effectively learn from imperfect data. [1] In order for the method to be applied, a dataset of labeled expert demonstrations is required. In many practical applications, we do not have access to this information. [1] Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations, Daniel S. Brown, Wonjoon Goo, and Scott Niekum, CoRL 2019 Technical Quality: 3 good Clarity: 3 good Questions for Authors: In line 310, the authors claim that NBCU performs worse than BC. This would be the expected result; however, in Table 2, for Ant and HalfCheetah, NBCU with Noisy Expert data performs significantly better than BC. Can the authors explain this in more detail? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review and check our paper, and for your insightful comments. **Comment 1:** The authors' analysis lacks consideration of alternative methods that can effectively learn from imperfect data. [1] **Response 1:** Thank you for bringing up the work [1], which utilizes automatically generated ranked demonstrations to learn a reward function for imitation. However, unlike our analysis, [1] primarily focuses on the online setting, and its main theorem contributes significantly by enabling the learning of a policy better than the demonstrator using the recovered reward function. It is worth noting that [1] requires a different assumption, namely that the expert policy is sufficiently sub-optimal. We will address this point in the revised paper by including the above discussion. **Comment 2:** In order for the method to be applied, a dataset of labeled expert demonstrations is required. In many practical applications, we do not have access to this information. **Response 2:** We appreciate your concern and would like to address it with two potential solutions. First, while obtaining a *complete* dataset of labeled expert demonstrations may be challenging in some practical scenarios, we believe it can be feasible and cost-effective to acquire *a few* labeled expert demonstrations (e.g., at least one expert trajectory in the locomotion control tasks considered in our experiments). Alternatively, if obtaining labeled expert demonstrations proves truly unfeasible, we propose a two-stage solution. Let us consider a mixed dataset containing both expert trajectories and non-expert trajectories. We can divide this dataset into two equal parts, denoted as $D_1$ and $D_2$. In the first stage, we infer a representative policy $\widehat{\pi}^{\operatorname{E}}$ from $D_1$ and use this policy to generate labels for the corresponding trajectories in $D_1$. 
Subsequently, in the second stage, we apply our framework, considering $D_2$ as supplementary data. Our intuition here is that expert trajectories likely constitute the majority of the entire dataset, so the policy $\widehat{\pi}^{\operatorname{E}}$ recovered from the noisy data $D_1$ can effectively act as a proxy for the expert policy. **Question 3:** In line 310, authors claim that NBCU performs worse than BC. This would be the expected results, however in Table 2 for Ant and HalfCheetah NBCU with Noisy Expert data performs significantly better than BC. Can the authors explain this in more detail? **Response 3:** Thanks for the nice observation and we have carefully examined this case. Through visualization (via PCA), we found that the state coverage (between the expert data and noisy non-expert data) is relatively nice for Ant and HalfCheetah tasks (compared with the other two tasks). Therefore, NBCU performs relatively well on these two tasks. For your reference, we have included the visualization plots in Figure 1 of a separate PDF file. --- We hope that the above answers can address your concerns satisfactorily. We would be grateful if you could re-evaluate our paper based on the above responses. We look forward to receiving your further feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. The provided responses adequately address my concerns and answer my questions, therefore I will increase my score to Weak Accept. The response that the authors provided to *Question 3* should be included in the main paper, and the strong claim that the authors make in line 310 should be adjusted accordingly. --- Reply to Comment 1.1.1: Comment: Your valuable comments and feedback are deeply appreciated. We are pleased to know that our responses addressed your concerns, and we extend our gratitude for your kind reconsideration of the score. We are committed to integrating your suggestions as we revise the paper.
Rebuttal 1: Rebuttal: We thank all reviewers for their expertise and efforts in reviewing our paper. We have responded to each review separately. We hope that our response can address the concerns well. Furthermore, we look forward to any additional comments or suggestions for improvement. Please take note that we have attached a separate PDF file containing new experimental results aimed at addressing the concerns from Reviewers Yb9D, Y1mx, and FJAh. Best, The Authors Pdf: /pdf/f2200bb7494d6542e8e65fc5781d19b707d6ae99.pdf
NeurIPS_2023_submissions_huggingface
2023
Suggesting Variable Order for Cylindrical Algebraic Decomposition via Reinforcement Learning
Accept (poster)
Summary: Cylindrical algebraic decomposition (CAD) is a technique to decompose a multi-dimensional space into a finite number of cells. CAD cells are built to respect a set of polynomial constraints such that each constraint has constant truth value in each cell. As stated by the authors, the variable ordering in the CAD algorithm significantly influences the computational time, memory usage, and the number of cells. The authors used Reinforcement-Learning-based approaches to choose an optimal variable ordering. The authors represented a polynomial set as a graph, in addition to the embeddings formed with different indicators of each polynomial (e.g., degree statistics), and used the number of obtained cells as a quality measure of the result. The authors used the Advantage Actor-Critic (A2C) framework to improve CAD by suggesting a better variable order. The actor neural network was represented by a Graph Neural Network, as GNNs are permutation invariant. Strengths: - Reinforcement Learning methods are widely adopted to optimize non-differentiable objectives, and the authors demonstrate their potential in choosing the variable order for CAD algorithms. - The experimental results demonstrate that their method outperforms the baseline. Weaknesses: The paper lacks some ablation studies. - The advantage of using a GNN and a graph representation of polynomial sets: The authors mentioned that GNNs are invariant to permutation; the results should indeed be independent of the order of the polynomials in the set, but there are simpler neural networks that are invariant to permutation. Can we empirically check how much a GNN enhances the performance of GRL-SVO? To check this, the authors can for instance replace the GNN with an MLP performed on each polynomial embedding and then do a mean aggregation to have a learnable representation of the whole polynomial set. - I would also encourage authors to test different GNN architectures (e.g., GAT, GATv2, GraphSage, ...) 
for the ablation study. - What is the effect of the normalization factor “M” on the performance and speed of GRL-SVO? - More experiments should be carried out to assess robustness. - Code not shared to verify results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the used architecture of the GNN? The authors mentioned “GraphConv” and cited the paper [26]. I didn’t see how the two are related to each other. If you are referring to the layer “GraphConv” in the DGL library, the appropriate name in the paper is “Graph Convolution Network layer [Kipf and Welling 2017]”. - Also, the paper [26] is more a method to enhance the expressiveness of a GNN rather than a GNN architecture. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors mentioned the limitations of their approaches in section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
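The permutation-invariant alternative the reviewer suggests (a shared MLP applied to each polynomial embedding, followed by mean aggregation, in the style of Deep Sets) can be sketched in a few lines of numpy; all dimensions and weights below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 14, 32, 8          # toy sizes, not the paper's
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))

def set_encoder(X):
    """Shared one-hidden-layer MLP per element, then mean-pool over the set."""
    H = np.maximum(X @ W1.T, 0.0)       # ReLU MLP layer applied row-wise
    Z = H @ W2.T                        # per-polynomial output embeddings
    return Z.mean(axis=0)               # mean aggregation: order-independent

X = rng.normal(size=(5, d_in))          # five toy polynomial embeddings
perm = rng.permutation(5)
z1, z2 = set_encoder(X), set_encoder(X[perm])
assert np.allclose(z1, z2)              # invariant to polynomial order
```

Because the mean is taken over set elements, any permutation of the rows of `X` yields the same output, which is exactly the property the reviewer asks to compare against a GNN.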
Rebuttal 1: Rebuttal: Thank you for the feedback and valuable comments. ``` Can we empirically check how much a GNN enhances the performance of GRL-SVO? To check this, the authors can for instance replace the GNN with an MLP performed on each polynomial embedding and then do a mean aggregation to have a learnable representation of the whole polynomial set. ``` GRL-SVO(UP/NUP) accepts the set of variable embeddings, and then GNNs encode the variable embeddings into intermediate learnable representations. The actor makes decisions based on this information. We think utilizing polynomial embeddings is a new task different from our work. We have conducted an experiment on a simple neural network with MLP layers for nodes without edges. The MLP layer converts variable embeddings (not polynomial embeddings) into their learnable representations. It is also invariant to permutation. It is Experiment 2 (NO_EDGE(MLP) in Table 2 and Figure 1 in the PDF attachment) in the response to reviewer HXCT. We will append it to the Appendix. The main result is that GRL-SVO can outperform these MLP models of different sizes. This is the advantage of the graph structure, where a variable can grasp neighbor information. Combining our work with polynomial embeddings for suggesting variable orders is also a promising direction; we will investigate it in the future. ``` I would also encourage authors to test different GNN architectures (e.g. GAT, GATv2, GraphSage ...) for the ablation study. ``` We have arranged experiments on different GNN architectures and will append the results to the Appendix. While the experiments are still ongoing, we have included a subset of the complete experiments on GraphSage in Table 3, which can be found in the PDF attachment. ``` What is the effect of the normalization factor $M$ on the performance and speed of GRL-SVO? ``` Experiment 6: We make an ablation experiment on $M$ and will append it to the Appendix. 
We train the models with $M$=10000, 20000, 50000 (ours), 100000, and without $M$. Note that if there is no $M$, the reward (the number of cells) will be a relatively large integer. As shown in Table 4, $M$ is necessary. The first number is the result on the validation set, while the second is the result on the testing set. ``` Code not shared to verify results. ``` We have submitted our source code to the AC. We will clean and release the source code and dataset for the experiments in this paper on GitHub. ``` What is the used architecture of the GNN? The authors mentioned ''GraphConv'' and cited the paper [26]. I didn’t see how the two are related to each other. If you are referring to the layer ''GraphConv'' in the DGL library, the appropriate name in the paper is ''Graph Convolution Network layer [Kipf and Welling 2017]''. Also, the paper [26] is more a method to enhance the expressiveness of a GNN rather than a GNN architecture. ``` We apologize for the confusion and will revise it in the next version. First, we utilize PyTorch Geometric (PyG) for the implementation in Section 4.1, not the DGL library. There is a difference between the names in the two libraries. Second, the operator of GraphConv in PyG is $\mathbf{x}^{\prime} _i = \mathbf{W} _1 \mathbf{x} _i + \mathbf{W} _2 \sum _{j \in \mathcal{N}(i)} e _{j,i} \cdot \mathbf{x} _j$. As there is no edge weight in our models, the operator we actually use is $\mathbf{x}^{\prime} _i = \mathbf{W} _1 \mathbf{x} _i + \mathbf{W} _2 \sum _{j \in \mathcal{N}(i)} \mathbf{x} _j$. Note that the basic GNN model is given by formula (5.7) in Section 5.1.3 of the book ''Graph Representation Learning'' by William L. Hamilton. 
$$ \mathbf{h}^{(k)} _u = \sigma(\mathbf{W}^{(k)} _{self} \mathbf{h}^{(k-1)} _u + \mathbf{W}^{(k)} _{neigh} \sum _{v \in \mathcal{N}(u)} \mathbf{h}^{(k-1)} _v + b^{(k)}) $$ Our model is a simple instance of this basic GNN model where $\sigma = relu$, $\mathcal{N}(u)$ is the set of nodes connected to $u$, and $\mathbf{W}^{(k)} _{self}$ and $\mathbf{W}^{(k)} _{neigh}$ are all learnable parameters. We will revise citation [26] to formula (5.7) in ''Graph Representation Learning'', which is what we used, and write down the GNN formula in the paper. --- Rebuttal Comment 1.1: Comment: With the consent of the AC, we provide an anonymous link to the source code: https://anonymous.4open.science/r/GRL-SVO-53C2/. If you have any questions or concerns, we would be delighted to discuss them with you. --- Reply to Comment 1.1.1: Comment: Dear reviewer gZi1, We have conducted more experiments by increasing the number of epochs to 100. However, due to the time-consuming nature of interacting with the symbolic computation tool, the experiments on UP have not yet been completed. On the other hand, the experiments on NUP, running for 100 epochs, have been completed. Here, we update the results, including experiments on MLP, other GNN architectures, and the analysis of $M$. Note that the performance of GRL-SVO(NUP) at 100 epochs is #SI = 1772, AVG.T = 94.87, AVG.N = 2166.67. The following table presents the performance of MLP with various sizes. For instance, ''4-256'' indicates a model with 4 layers and a 256-dimensional intermediate representation. It is important to note that ''4-512'' has twice the number of parameters compared to our model. It shows the positive effect of the graph structure on learning. 
| | 2-256 | 3-256 | 4-256 | 4-512 | | --- | --- | --- | --- | --- | | #SI | 1763 | 1763 | 1764 | 1756 | | AVG.T | 97.89 | 98.43 | 97.68 | 99.08 | | AVG.N | 2132.33 | 2140.18 | 2129.33 | 2148.64 | We test other GNN architectures that accept the same parameters in PyG: - ClusterGCNConv: operator from "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks"; - EGConv: operator from "Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions"; - FiLMConv: operator from "GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation"; - LEConv: operator from "ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations"; - GATConv: operator from "Graph Attention Networks"; - GATv2Conv: operator from "How Attentive are Graph Attention Networks?"; - GeneralConv: operator from "Design Space for Graph Neural Networks"; - ResGatedGraphConv: operator from "Residual Gated Graph ConvNets"; - SageConv: operator from "Inductive Representation Learning on Large Graphs"; - TransformerConv: operator from "Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification". | | ClusterGCNConv | EGConv | FiLMConv | LEConv | GATConv | GATv2Conv | GeneralConv | | --- | --- | --- | --- | --- | --- | --- | --- | | #SI | 1767 | 1765 | 1760 | 1761 | 1726 | 1762 | 1768 | | AVG.T | 94.26 | 98.53 | 96.49 | 96.76 | 119.72 | 104.14 | 95.17 | | AVG.N | 2144.66 | 2177.32 | 2136.67 | 2151.75 | 2166.78 | 2184.86 | 2132.81 | | | ResGatedGraphConv | SageConv | TransformerConv | | --- | --- | --- | --- | | #SI | 1759 | 1765 | 1769 | | AVG.T | 96.25 | 96.74 | 94.45 | | AVG.N | 2146.59 | 2131.22 | 2134.51 | We have observed that each graph neural network (GNN) is capable of effectively learning this problem. Our approach is not reliant on a specific network structure. From the experiments on $M$, we find that $M$ is necessary. $M$ only affects the training process. 
In the training set, the maximum number of cells for 3-variable instances is 18801. Therefore, selecting $M$ close to or greater than 18801 is recommended. When the reward is normalized to a relatively small value, training is more stable and converges faster. However, using NO_M or setting $M=1000$ leads to deteriorating model performance over the training epochs. On the other hand, setting $M=10000, 50000, 100000$ results in a normal training process. In practice, as $M$ increases, the epoch at which the optimal model appears is also delayed. $M=50000$ has the smallest variance of #SI on the validation set among these settings. | | 10000 | 50000 | 100000 | | --- | --- | --- | --- | | index of epoch of optimal model | 37 | 64 | 93 | | variance on validation set | 94.19 | 24.52 | 30.99 |
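The GraphConv update discussed in this thread, $\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \mathbf{W}_2 \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j$, fits in a few lines of numpy. This is an illustrative sketch with random weights and a toy association graph, not the authors' implementation; it also makes visible why the NO_EDGE ablation reduces to a per-node MLP:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 4, 14, 8               # toy sizes: 4 variables, 14 features
W1 = rng.normal(size=(d_out, d_in))     # self-transform
W2 = rng.normal(size=(d_out, d_in))     # neighbor-transform

def graph_conv(X, A):
    """x'_i = W1 x_i + W2 * sum_{j in N(i)} x_j (unweighted edges)."""
    return X @ W1.T + (A @ X) @ W2.T    # A @ X sums the neighbor embeddings

X = rng.normal(size=(n, d_in))          # one embedding per variable
A = np.array([[0, 1, 1, 0],             # toy adjacency of an association graph
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], float)

out = graph_conv(X, A)                  # (4, 8) updated variable embeddings
# With no edges (NO_EDGE), the neighbor term vanishes: a plain shared MLP.
assert np.allclose(graph_conv(X, np.zeros_like(A)), X @ W1.T)
```

Stacking several such layers with a ReLU between them gives the simple instance of the basic GNN model described in the rebuttal.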
Summary: A curve y = x^2 partitions the x-y plane into three sets on each of which the sign of y - x^2 is constant. On the curve it is zero, above the curve y = x^2 the sign is positive, and below the curve it is negative. Thus the polynomial has 3 cells - regions where the sign is invariant. Given a polynomial with n variables, there is a mathematical procedure (project, root-isolate and lift) to identify cells. The mathematical procedure may result in different cells depending on how the variables are ordered. The objective is to choose a variable ordering in order to minimize the number of cells generated by the mathematical procedure. The contribution of the paper is to formulate a graph representation of the problem (the state); the action is the variable permutation and the reward is based on minimizing the number of cells. The REINFORCE algorithm is used. Strengths: 1. It is nice to see that RL is being used for such problems. It is a nice and natural fit. 2. The model is trained on the 3-variable set but generalizes to up to the 9-variable set. Weaknesses: 1. The novelty is in setting up the problem. There is not much RL novelty. 2. REINFORCE has high variance. There should be more analysis of the impact of the algorithm on the solution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The performance improves very quickly according to Figure 4. Is there an explanation? How stable are the runs of REINFORCE? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. ``` The performance improves very quickly according to Figure 4. Is there an explanation? How stable are the runs of REINFORCE? ``` As an example, the Brown heuristic (EB & NUP in Section 2.2) only utilizes three statistical features (degree, total degree, and occurrence) to distinguish the importance of variables, and it can already achieve a good result. GRL-SVO(UP/NUP) focuses on a superset of such heuristics. They have a greater probability of constructing a relatively effective heuristic by organizing these features, and 3-var instances are relatively small and suitable for learning. Therefore, it is reasonable that the models improve quickly in the first few epochs. Besides, due to the quick improvement in the first epoch, the subsequent variations of the metrics become very hard to see in Figure 4. We have redrawn the plots starting from the second epoch, referring to Figure 3 in the PDF attachment. It can be observed that REINFORCE exhibits some instability. --- Rebuttal 2: Comment: I have read all the comments by the authors and other reviewers and am satisfied by the response. I will maintain my original rating. --- Rebuttal Comment 2.1: Comment: Dear Reviewer Ydm9, We appreciate your thorough review and are delighted that our responses have satisfied your queries and concerns. Best regards, Authors
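The reviewer's point about REINFORCE's variance can be made concrete with a toy score-function estimator on a two-armed bandit; everything here (the Bernoulli policy, the fixed rewards, the sample size) is an illustrative assumption, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))        # prob. of choosing arm 1 (sigmoid policy)
rewards = np.array([1.0, 3.0])          # fixed rewards for arms 0 and 1

def grad_samples(baseline, n=20000):
    """Per-sample REINFORCE gradient estimates (r_a - b) * d/dtheta log pi(a)."""
    a = (rng.random(n) < p).astype(int)  # sampled actions
    score = a - p                        # d/dtheta log pi(a) for this policy
    return (rewards[a] - baseline) * score

g_plain = grad_samples(baseline=0.0)
g_base = grad_samples(baseline=rewards.mean())
# Both estimators target the same gradient, but the baseline cuts the variance.
assert abs(g_plain.mean() - g_base.mean()) < 0.05
assert g_base.var() < g_plain.var()
```

Subtracting a baseline (here simply the mean reward; in the paper's A2C setup, the critic's value estimate plays this role) leaves the estimator unbiased while shrinking its variance, which is the standard remedy for the instability the rebuttal observes.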
Summary: This work proposes a reinforcement learning based method for the selection of a more efficient variable order for the downstream CAD (cylindrical algebraic decomposition) task. The objective is to minimize the number of cells, a suitable metric that intuitively reflects CAD efficiency. The proposed GRL-SVO(UP/NUP) utilizes the inductive biases of GNNs to learn the relationships among the variables in the polynomial set and outputs a variable order for CAD via the Actor-Critic algorithm. Summary: The main novelty lies in the utilization of RL and GNN for the problem of suggesting a variable order for CAD and brings the benefit of better empirical performance and generalization. It contributes to the field as the first work to try an RL method for this task, and it can be further improved by exploring how to encode polynomial coefficients as edge information, boosting prediction time, etc. The inclusion of an interpretive analysis outlining the reasoning behind the variable order proposed by the RL approach would enrich the paper's contribution and the reader's understanding. Strengths: Pros: 1. It effectively reframes the polynomial set as an associated graph, thereby capitalizing on the inherent advantages of GNNs such as permutation invariance and sparse input awareness. This approach may potentially unveil complex variable interrelationships that could otherwise be overlooked by traditional handcrafted heuristics. 2. This study is the first to treat this task as a reinforcement learning problem. 3. Experimental results show compelling evidence for this method when compared to other heuristic methods in the past. And this approach exhibits good generalization when scaled to higher-variable problems. 4. The paper seems to be well-structured and provides sufficient detail about the experiments. Weaknesses: Cons: 1. The associated graph representation may not fully capture all information about the polynomials, such as the coefficients. 
It only encodes relationships between variables. This could be a limitation as the coefficients in a polynomial do carry significant mathematical information. 2. The proposed approach doesn't significantly enhance the inference time compared to existing heuristics as in Figure 4c. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could you clarify the role of polynomial coefficients within your proposed graph representation? Specifically, are these coefficients incorporated into the graph structure or the node embeddings? Furthermore, what is your perspective on the potential importance of these coefficients for determining optimal variable orderings? 2. Could you elaborate on how GRL-SVO(UP) and GRL-SVO(NUP) are complementary to each other? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, the author has adequately addressed limitations in the designated section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. ``` Could you clarify the role of polynomial coefficients within your proposed graph representation? Specifically, are these coefficients incorporated into the graph structure or the node embeddings? Furthermore, what is your perspective on the potential importance of these coefficients for determining optimal variable orderings? ``` Indeed, with regard to the choice of variable order, current works do not consider coefficients (neither Expert-Based nor Learning-Based heuristics). Some experts in Symbolic Computation believe that the reason may be related to the calculation of the CAD projection: CAD projection uses two polynomials to form a resultant (Definition 1 in the Appendix) to eliminate a common variable, so it first depends on the common variable set of these two polynomials (which does not involve coefficients); secondly, the amount of calculation generally depends on the degree of the common variables, because the degree determines the size of the resultant matrix. For practical instances, the variation range of the coefficients is uncontrollable, which increases the difficulty of Learning-Based design and of training the neural network. The impact of coefficients on the problem is complex. We try to provide some intuition through Figure 2 in the PDF attachment. - (a): $\{ x^3y+4x^2+xy, -x^2+2xy-1 \}, x \prec y: 13, y \prec x: 89$; - (b): $\{ x^3y+8x^2+xy, -x^2+2xy-1 \}, x \prec y: 13, y \prec x: 89$; - (c): $\{ x^3y-4x^2+xy, -x^2+2xy-1 \}, x \prec y: 45, y \prec x: 125$; - (d): $\{ x^3y+4x^2+xy, x^2+2xy-1 \}, x \prec y: 29, y \prec x: 101$; - (e): $\{ -x^3y+4x^2+xy, -x^2+2xy-1 \}, x \prec y: 45, y \prec x: 97$. The numbers of cells of (a) and (b) are the same, while those of (c), (d), and (e) are different. But the best variable order is the same ($x \prec y$) in all these cases. To a certain extent, in these cases the coefficients mostly affect the number of cells. 
Experiment 4: We conduct an experiment on the coefficients and will append it to the Appendix. We have randomly modified the coefficients (in [-100, 100]) of 1000 instances randomly selected from the 3-var testing set. Since the coefficients were the only modification made, we used the original variable order generated from the unaltered instances. Our models continue to outperform other heuristics, as demonstrated by the results obtained. | | brown | triangular | EMLP | sotd | ndrr | gmods | GRL-SVO(NUP) | GRL-SVO(UP) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | #SI | 831 | 762 | 855 | 898 | 842 | 867 | 892 | **912** | | AVG.T | 155.42 | 212.50 | 125.54 | 79.73 | 141.29 | 100.30 | 79.96 | **64.89** | | AVG.N | 2380.67 | 2606.40 | 2353.11 | 2065.13 | 2288.08 | 2162.19 | 2100.57 | **2023.58** | We also observe a slight decline in the performance of all heuristics, indicating that the coefficients play a significant role as a parameter (although perhaps not the most crucial one). Effectively analyzing coefficients poses a challenging problem, and we will endeavor to address this issue in the future. ``` Could you elaborate on how GRL-SVO(UP) and GRL-SVO(NUP) are complementary to each other? ``` We will revise the unexplained description in the next version. It corresponds to the case where GRL-SVO encounters a large instance. Although GRL-SVO(UP) exhibits superior performance, it is time-consuming due to the involvement of projection and interaction with symbolic computation tools. As a compromise, we can utilize GRL-SVO(UP) to predict the initial variables of the variable order while employing GRL-SVO(NUP) to predict the remaining variables, or vice versa. Alternatively, cross-invoking both methods can also be considered a viable solution. In this particular case, they complement each other effectively. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I have read the comments and new experiments conducted by the authors. 
The fact that their method consistently outperforms other models with varying coefficients shows the method is robust. I am happy to raise my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: Dear Reviewer zrjH, We are delighted that our explanations resolved your concerns and appreciate your very encouraging feedback. Best regards, Authors
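The rebuttal above notes that a CAD projection step eliminates a common variable via a resultant, and that the degree drives the cost because it fixes the size of the resultant (Sylvester) matrix: for inputs of degrees m and n in the eliminated variable, the matrix is (m+n) x (m+n). A self-contained pure-Python sketch for univariate integer polynomials (illustrative only, not the authors' tooling):

```python
def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists, highest degree first."""
    m, n = len(f) - 1, len(g) - 1       # degrees of f and g
    size = m + n                        # matrix size grows with the degrees
    M = [[0] * size for _ in range(size)]
    for i in range(n):                  # n shifted copies of f's coefficients
        for j, c in enumerate(f):
            M[i][i + j] = c
    for i in range(m):                  # m shifted copies of g's coefficients
        for j, c in enumerate(g):
            M[n + i][i + j] = c
    return M

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small M)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, c in enumerate(M[0]):
        if c == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * c * det(minor)
    return total

# Res(x^2 - 1, x - 2): a 3x3 Sylvester matrix whose determinant is 3,
# matching the root formula g(1) * g(-1) = (-1) * (-3) = 3.
assert det(sylvester([1, 0, -1], [1, -2])) == 3
```

The resultant vanishes exactly when the two polynomials share a root in the eliminated variable, which is why projection depends on the common variable set rather than on the coefficients themselves.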
Summary: This paper proposed a new method for suggesting the variable order for the Cylindrical Algebraic Decomposition (CAD) problem. Their method utilizes Graph Neural Networks (GNN) and Reinforcement Learning (RL). They propose two variants: one utilizing projection (UP) and one without projection (NUP). They test their methods on two datasets and show that the UP method outperforms nearly all methods on nearly all tasks, while the NUP method outperforms almost all NUP methods. In addition, the UP method they propose is the first LB and UP method, according to their report. In summary, I think this is a very creative application of RL and GNN to combinatorial optimization problems, though some improvements and ablation studies can be made. The writing is very good, and the comparison to the current methods is complete and well categorized. Strengths: 1. They propose two algorithms, GRL-SVO(UP) and GRL-SVO(NUP), utilizing techniques from RL and GNN to optimize the number of cells of the CAD problem for a certain variable order. I think this idea is creative and worth investigating. 2. Their explanation of CAD and SVO (suggesting variable orders) is very clear (especially the detailed version in the appendix), and they give many examples to help people who are not familiar with CAD (like me) quickly understand what they are doing. 3. The experiments show that their method outperforms most of the current methods on most datasets, and they also compare the UP and NUP methods they developed. 4. The writing is very good and the literature review is clear and complete. Weaknesses: In general this is a good paper. Below are my personal suggestions for a better paper. I do not expect you to add more experiments during the short rebuttal period, so you should not worry about it. 1. The interpretability is relatively weak. 
Compared to the Expert-Based methods, learning-based methods typically use some black-box model such as deep neural networks, GNNs, or RL to optimize the number of cells of the CAD problem. However, more explanation and ablation studies can be done, although in general this GNN+RL method is a black box. For example, in EB methods, people often use some human-crafted features to suggest the variable order. Since you encode 14 human-crafted features in the graph embedding matrix, maybe you can show which features in the matrix are the most important? Or does this completely depend on the properties of the set of polynomials? Also, since you use the adjacency matrix of the association graph as well as the variable embeddings as inputs, what will happen if you simply input the adjacency matrix or the variable embeddings? Will the performance drop drastically? 2. There might be overfitting when you apply a large neural network to a small set of data. In your experiments, the NN has four layers with a hidden size in the hundreds, but the data size is relatively small, with only thousands or even hundreds of data points. So I am not sure whether there is overfitting in your training. I think one possible solution is to use a smaller neural network to train the model and evaluate it (you can also try a larger one) to see whether the performance drops when the size increases. Another question is, I do not see how you use the validation dataset (in Line 269, you said the training : validation : testing split is 8:1:1). Typically the validation set is used to tune some hyperparameters, but I do not see in the appendix that you tuned any parameter using this set. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Your suggestions are very important and helpful for improving the quality of our work, so we have arranged as many experiments as possible to further discuss our results. ``` Since you encode 14 human-crafted features in the graph embedding matrix, maybe you can show which features in the matrix are the most important? ``` Experiment 1: We conduct an experiment on the effect of features and will append it to the Appendix. We make masks for these 14 features, where a mask sets the features that we do not care about to zero: 1. One-hot masks (test the effect of a single feature); for example, to test the effect of $E_1$, the corresponding one-hot mask is (1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0). Multiplying with the input features results in a feature vector with only $E_1$ while the others are 0. 2. Operation masks (test the effect of different operations in features) group features according to their operation type (maximum, sum, and proportion). Note that we treat $E_1, E_2$ as sub-features utilizing the sum operation. - Max: $E_3,E_5,E_6,E_7,E_{13},E_{14}$ - Sum: $E_1,E_2,E_4,E_8,E_9,E_{10}$ - Prop: $E_{11},E_{12}$ 3. Object masks (test the effect of different objects in features) group features according to their target objects (variable, term, and polynomial). - Var: $E_1,E_3,E_4,E_{13},E_{14}$ - Term: $E_5,E_6,E_7,E_8,E_9,E_{10},E_{12}$ - Poly: $E_2,E_{11}$ 4. Because degree is a common feature that most heuristics use, testing the effect of degree is necessary. Degree masks group features according to whether they utilize degree. - Degree: $E_3,E_4,E_5,E_7,E_8,E_9$ - NoDegree: $E_1,E_2,E_6,E_{10},E_{11},E_{12},E_{13},E_{14}$ Due to space limitations, we do not list the results of single features. They show that one feature alone is not enough. The best performing feature is $E_{12}$ (on GRL-SVO(UP)) with #SI = 1706, AVG.T = 129.86, AVG.N = 2386.24. 
Table 1 in the PDF attachment shows the results of operation, object, and degree features. The sum, term, and degree may be the most important factors, as using or omitting the Sum/Term/Degree features results in the largest differences in performance. ``` since you use the adjacency matrix of the association graph as well as the variable embeddings as inputs, what will happen if you simply input the adjacency matrix or the variable embeddings? Will the performance drop drastically? ``` Experiment 2: We conduct an experiment on the effect of architecture and will append it to the Appendix. We build two models: one without embeddings (NO_EMB), and the other without edges (NO_EDGE). Note that the operator of GraphConv in PyG is $\mathbf{x}^{\prime} _i = \mathbf{W} _1 \mathbf{x} _i + \mathbf{W} _2 \sum _{j \in \mathcal{N}(i)} e _{j,i} \cdot \mathbf{x} _j$. As there is no edge weight in our models, the operator we actually use is $\mathbf{x}^{\prime}_i = \mathbf{W} _1 \mathbf{x} _i + \mathbf{W} _2 \sum _{j \in \mathcal{N}(i)} \mathbf{x} _j$. NO_EDGE makes $\mathcal{N}$ empty and $W_2$ loses its effect, so the model reduces to MLPs. Table 2 in the PDF attachment shows the results of NO_EDGE(MLP) and NO_EMB. The performance of NO_EMB drops dramatically while that of NO_EDGE is good. GRL-SVO can still outperform such models, and GRL-SVO(NUP) does so by a larger margin. Because the NUP training time is relatively short (compared to UP), we continued training GRL-SVO(NUP) and NO_EDGE(NUP) to 30 epochs and explored the performance of NO_EDGE under different parameter sizes. MLP_4_512 has twice as many parameters as our model. GRL-SVO(NUP) outperforms all MLPs, as shown in Figure 1. This is the advantage of the graph structure, where a variable can grasp neighbor information. ``` I think one possible solution is to use a smaller neural network to train the model and evaluate it (you can also try a larger one) to see whether the performance drops when the size increases. 
``` Experiment 3: We have conducted an experiment on the effect of size and will append it to the Appendix. # GNN layers (for short, #G) $\in$ {1, 2, 3, 4, 5} and # Intermediate layer features (for short, #I) $\in$ {32, 64, 128, 256, 512}, where the Actor and Critic keep the same proportion 2:4:1. Note that the input dimension of the Actor and Critic is the same as #I. For example, if #I = 32, then the dimensions of the Actor and Critic are both [32, 64, 16]. Elements in the following tables are #SI on the validation set and #SI on the testing set, respectively. The first table is the performance (#SI) of GRL-SVO(NUP) and the second is the performance (#SI) of GRL-SVO(UP). | | 32 | 64 | 128 | 256 | 512 | | --- | --- | --- | --- | --- | --- | | 1 | 1754,1737 | 1756,1741 | **1771**,1765 | 1770,1765 | 1764,1765 | | 2 | 1771,1764 | 1762,1747 | 1765,1763 | 1767,1766 | 1764,1768 | | 3 | 1765,1758 | 1756,1744 | 1766,1758 | 1761,1766 | 1770,**1769** | | 4 | 1765,1761 | 1768,1761 | 1764,1762 | 1766,1765 | 1769,1765 | | 5 | 1768,1762 | 1765,1763 | 1771,1759 | 1766,1761 | 1763,1762 | | | 32 | 64 | 128 | 256 | 512 | | --- | --- | --- | --- | --- | --- | | 1 | 1767,1759 | 1768,1758 | 1777,1774 | 1789,1789 | 1790,1793 | | 2 | 1773,1763 | 1780,1777 | 1778,1776 | 1786,1793 | 1788,**1794** | | 3 | 1781,1767 | 1783,1778 | 1782,1785 | 1788,**1794** | 1786,1793 | | 4 | 1775,1766 | 1784,1775 | 1780,1779 | **1792**,**1794** | 1788,1793 | | 5 | 1779,1779 | 1784,1781 | 1788,1797 | **1792**,1790 | 1791,1790 | The results show that (4, 256) seems the better option. If the size is larger than (4, 256), i.e., (4, 512), (5, 256), (5, 512), the performance of the network drops slightly. ``` I do not see how you use the validation dataset (in Line 269, you said the training : validation : testing is 8:1:1). ``` We have selected the best model parameters during training via the validation dataset and will append the results of selecting hyperparameters to the Appendix. 
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their additional experiments! I think these ablation studies are very good, so I raised my score to eight. I will strongly support this paper being accepted, given their creative and solid work and the fact that they added many experiments in the short rebuttal period. Good luck! --- Reply to Comment 1.1.1: Comment: Dear Reviewer HXCT, We are delighted that our responses have met your satisfaction. We would like to express our heartfelt gratitude for your recognition and support regarding our submission. Sincerely, Authors
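The mask-based feature ablation in Experiment 1 of this thread amounts to an elementwise product of each embedding with a binary vector. A minimal numpy sketch using the Sum group listed in the rebuttal (the embedding values themselves are toy data):

```python
import numpy as np

N_FEAT = 14                             # the 14 human-crafted features E1..E14

def group_mask(indices):
    """Binary mask keeping only the 1-based feature indices in `indices`."""
    m = np.zeros(N_FEAT)
    m[[i - 1 for i in indices]] = 1.0
    return m

SUM = group_mask([1, 2, 4, 8, 9, 10])   # the Sum group: E1,E2,E4,E8,E9,E10
PROP = group_mask([11, 12])             # the Prop group: E11,E12

emb = np.arange(1.0, N_FEAT + 1)        # a toy variable embedding
masked = emb * SUM                      # features outside the group become 0
assert masked[0] == 1.0                 # E1 kept
assert masked[2] == 0.0 and masked[10] == 0.0  # E3 and E11 zeroed out
```

Training with such masked inputs and comparing #SI across groups is exactly the comparison reported for the Max/Sum/Prop, Var/Term/Poly, and Degree/NoDegree splits.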
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for the reviewers' valuable feedback, which is quite helpful for enhancing the quality and clarity of our work. We have designed some experiments and attempted to complete as many of them as possible during this short period. We will add the missing experiments that the reviewers are concerned about to the Appendix. The PDF attachment contains some tables and figures. **Notes on Additional Experiments** If not specified otherwise, all models are trained on the 3-var set for 10 epochs. The best model parameters are selected on the 3-var validation set, and the models are tested on the 3-var testing set. GRL-SVO(UP/NUP) are the models trained for 10 epochs for a fair comparison. The performance of GRL-SVO(NUP) and GRL-SVO(UP) on the 3-var testing set at 10 epochs is #SI = 1765, AVG.T = 97.78, AVG.N = 2142.98 and #SI = 1794, AVG.T = 79.57, AVG.N = 2075.23, respectively. Pdf: /pdf/33ea3655354b0e335deb647c7ea7890b7f1b14dd.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Fairness under Noise Perturbation: from the Perspective of Distribution Shift
Reject
Summary: The authors introduce an innovative framework that enhances the fairness guarantees of a classifier in the presence of both sensitive attribute noise and label noise, considering them independently as well as in combination. Their approach incorporates theoretical guarantees and involves training a fair encoder to learn a novel data representation that ensures both fairness and accuracy. They demonstrate that by imposing bounded divergence between the noisy and clean distributions, fairness can be effectively transferred from one distribution to another. Notably, their method tackles the problem from a distribution shift perspective, eliminating the need for noise rate estimation typically required by conventional noise tolerant models. Strengths: - The introduction effectively substantiates all the claims made, including the contributions put forth by the authors. These assertions find validation through a thorough description of the methodology employed and the experiments conducted. The method section elaborates on the techniques and approaches considered, demonstrating how they align with the stated objectives. Furthermore, the experimental results provide empirical evidence that supports the claims made in the introduction. - The problem addressed in the paper is well motivated. The authors provide a comprehensive and compelling rationale for the significance and relevance of the problem. They effectively highlight the real-world implications and potential consequences of the existing limitations in the field. - The authors introduce an innovative alternative approach that effectively addresses the limitations of state-of-the-art (SOTA) methods. By identifying and highlighting the drawbacks of existing techniques (noise rate requirements), they demonstrate a clear understanding of the challenges at hand. - The methodology is clearly explained and well-organized. 
The paper includes sub-sections that effectively delineate different aspects of the methodology, ensuring a coherent and structured presentation. - The paper demonstrates commendable attention to reproducibility by providing thorough and detailed information regarding the experimental setup. - For the experimental evaluation, the authors take into account various types of data, including both tabular and image data. - The selection of datasets and the procedure employed to generate synthetic datasets align well with similar approaches found in the existing literature. - The evaluation conducted in the paper is both sound and comprehensive. The authors meticulously consider various aspects to ensure a robust evaluation. Weaknesses: - (Section 4, Experiments) The authors put forth a proposition to tackle the challenge of ensuring fairness in the presence of noise from a distribution shift perspective. However, in the experimental section, they fail to compare their proposal with methods that specifically address distribution shift in a fairness-aware scenario. It is worth considering that these alternative methods may also yield promising results when handling noisy sensitive and label information. Including such comparisons would provide a more comprehensive understanding of the relative performance and effectiveness of the proposed approach within the context of fairness under distribution shift. - (Section 4, Experiments) The paper presents theoretical bounds, but unfortunately, they are not evaluated empirically. While the theoretical analysis offers valuable insights and establishes the potential effectiveness of the proposed approach, the absence of empirical evaluations leaves room for uncertainty regarding its practical applicability. Empirical evaluations would have provided concrete evidence of the proposed method's performance and its ability to meet the expected bounds. 
- (Section 2, Fairness metrics) The discussion of fairness metrics lacks a clear structure, and I would suggest that the authors differentiate between individual and group notions of fairness, providing distinct explanations for each. Additionally, it would be beneficial for the authors to acknowledge the emergence of mini-max fairness notions, which are gaining popularity in the field. - (Section 2, Fairness-enhancing interventions) While describing the pre-, in-, and post-processing methods, the authors primarily focus on specific techniques instead of providing an overview of the general framework. Consequently, it is not accurate to claim that all preprocessing methods aim to rectify the distribution of input features, nor is it true that all in-processing methods incorporate fairness enhancement as relaxed constraints. In reality, regarding the latter, there are variations where fairness is achieved through techniques such as fairness penalizations. While these cases are commonly encountered, it would be preferable for the authors to first describe the overarching objectives of the general workflows before delving into specific specifications. This approach would provide a clearer understanding of the broader goals before examining the specific techniques used. Moreover, the authors fail to explicitly state that their method constitutes an in-processing intervention. - (Section 3.2) The authors initially discuss general distribution shift, but in line 167, they assert that they address covariate shift. It is important to note that these two types of shifts have distinct mathematical implications. Covariate shift specifically involves changes in p(x) between the source and target domains, while assuming that the functional form of p(y|x) remains unchanged. It would be beneficial for the authors to clarify which shift they are specifically addressing and how the mathematical characteristics of covariate shift come into play within their approach. 
Providing further clarity on this matter would help readers understand the specific focus and contributions of the proposed method in addressing the relevant shift. - (Section 2, Fairness under distribution shift) In this section, the authors overlook several pertinent works, and some of the works mentioned are not even published. However, there exists a substantial body of literature that specifically addresses the challenge of ensuring fairness guarantees under distribution shift (for a comprehensive survey, the authors can refer to [1]). It is important to differentiate between methods that solely tackle distribution shift and those specifically designed for ensuring fairness under distribution shift. Furthermore, it is worth noting that different methods consider varying levels of data availability in the target domain, and not all of them assume the availability of (X, A) pairs [1]. For instance, the work [2] cited in that section assumes the target data is not available. - (Section 2, Related works) The purpose of this section is to not only provide a description of the related works but also to establish the connection between them, elucidate the significance of these relationships, and highlight the novelty of the proposed work or its intended aim to address specific limitations. However, despite providing descriptions of various works, the authors do not explicitly specify the precise position of their work within the broader landscape. - Notation issues: After defining the notation at the beginning of Section 3, and Section 3.1, the authors employ symbols that have not been defined, such as, A in line 137, or $\mathcal{L}_{cls}$ in Eq (6). Regarding the latter, there is no specification regarding its meaning nor its mathematical form. - The paper contains several typos: line 67 after more there is a full stop, line 129 after of there should be an 'a', line 217 let should be in uppercase. [1] Barrainkua, A., Gordaliza, P., Lozano, J. A., & Quadrianto, N. 
(2022). A Survey on Preserving Fairness Guarantees in Changing Environments. arXiv preprint arXiv:2211.07530. [2] Rezaei, A., Fathony, R., Memarrast, O., & Ziebart, B. (2020, April). Fairness for robust log loss classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 04, pp. 5511-5518). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Theoretical results in the paper are presented in two different contexts: some are with respect to P, while others are with respect to Q. However, the relationship between these two contexts is not clearly established or explained. How are they related? - The framework presented in this study focuses on a binary sensitive attribute and a binary label, which may limit its applicability in more complex scenarios commonly encountered in various applications. For example, many SOTA noise-tolerant approaches cited in the work can handle those situations. But can this framework be extended to support multi-class Y or multi-valued S? Moreover, can it effectively handle scenarios involving multi-dimensional S? Further clarification is needed to assess the flexibility and scalability of the proposed framework in handling these additional complexities. - Does your approach primarily consider general distribution shift or does it specifically focus on covariate shift? - Why have you only chosen DI and EOd as fairness metrics? Can it be extended to other statistical notions? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors do not thoroughly discuss the limitations of their method, which is an important aspect to consider. 
Taking inspiration from the questions raised concerning the shift type and potential implications beyond binary Y and binary S could be valuable in addressing the limitations and further refining their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment. We'll fix the typos and include the suggested reference and discussions in the final paper. For **Weakness 1 (W1): Comparison with fairness under distribution shift**, **Question 1 (Q1): Connection between $P$ and $Q$**, **Q2: Non-binary setting** and **Limitations**, please refer to the global rebuttal. **[W2: Verification of theoretical bounds]** Thanks for the suggestion. We include results on empirical verification of the theoretical bounds stated in Theorem 1. For simplicity of expression, we denote $\text{EOd}^{\text{upper}} := \hat{\text{EOd}}+\frac{\eta_{00}+\eta_{01}}{1-\eta_{00}-\eta_{01}} \sqrt{\epsilon_0}+\frac{\eta_{10}+\eta_{11}}{1-\eta_{10}-\eta_{11}} \sqrt{\epsilon_1}$ as the upper bound on $\text{EOd}$ stated in Theorem 1. Results are shown as follows:

**Table 1: Empirical verification of Theorem 1 on the COMPAS, Adult and CelebA datasets regarding our method. The noise rates are set the same as in Tab. 1-3 of our paper.**

| Dataset | $\hat{\text{EOd}}$ | EOd | $\text{EOd}^{\text{upper}}$ |
| --- | --- | --- | --- |
| COMPAS | 0.06 | 0.09 | 0.13 |
| Adult | 0.04 | 0.07 | 0.10 |
| CelebA | 0.04 | 0.08 | 0.11 |

This shows that the upper bound $\text{EOd}^{\text{upper}}$ in Theorem 1 serves as a good approximation of $\text{EOd}$, which thereby verifies the practical applicability of Theorem 1. **[W3: Fairness metrics]** Thanks for the kind suggestion. We'll include more discussion and explanation of individual and group fairness notions, and clarify that our method focuses on the group fairness notion. Also, we will carefully discuss the mini-max fairness notion within group fairness in the final paper, so as to better distinguish between the different notions. **[W4, W6, W7, Q3: Fairness-enhancing interventions and related works]** Thanks for the detailed suggestion. 
As the reviewer's questions concern the connection with related works, the position of our work, and discussions regarding related works on fairness, we combine the responses into one paragraph for a more complete answer. We'll carefully revise the phrasing for the general framework of pre-, in-, and post-processing methods in the final paper, as well as the subsection on fairness under distribution shift, to include more recent works on this topic and to provide a more precise description of the different assumptions on data availability in the target domain. Besides, as the reviewer pointed out, we will state explicitly that our method is an in-processing intervention. Our method is primarily focused on solving the fairness under noise perturbation problem from the perspective of covariate shift. Compared with previous work on fairness under noise perturbation, our method does not require estimation of the noise rates, which reduces both the computational complexity and the potential deviation in the estimation. Compared with work on fairness under distribution shift that does not require knowledge of the target data (for instance, work [1] formulates the problem as a mini-max game), our method makes several distinct contributions: - 1) we provide a provable fairness guarantee under varying noise rates, due to the invertibility of normalizing flows, as stated in Theorem 1 of our paper; - 2) since we are primarily concerned with the specific type of distribution shift induced by sensitive attribute noise (i.e., we intend to solve fairness under noise perturbation from the perspective of distribution shift), rather than a worst-case approximation of the shift [1], our method achieves better performance in terms of fairness, with minimal sacrifice in accuracy, across different datasets under the covariate shift induced by the noisy sensitive attribute $a$. We validate this by experimental results in Tab. 
1 of the global response, where we compare with the distribution-agnostic method [1], as suggested by the reviewer, as well as a method that requires knowledge of the target domain [2]. - 3) we validate in subsection 4.3 of our paper that our method also works under label shift, and it can be applied under simultaneous exposure to both label shift and the covariate shift induced by sensitive attribute noise. **[W5: Connection with fairness under distribution shift]** We are sorry for the confusion. Our discussion regarding distribution shift contains two different parts: covariate shift, which we use to model fairness under sensitive attribute noise, and label shift, which is directly related to the discussion regarding label noise in Lemma 3. We clarify that we do not intend to address the fairness under distribution shift problem in general; rather, we are primarily focused on the connection between covariate shift and fairness under sensitive attribute noise, and we try to solve the problem of fairness under sensitive attribute noise from the perspective of fairness under covariate shift. **[W8: Notation issues]** Thanks for the suggestion. $A$ in Line 137 refers to the random variable that corresponds to the sensitive attribute, as defined in Line 124 of our paper. $\mathcal{L}\_{cls}$ refers to the classification loss. Under the binary setting, we choose $\mathcal{L}\_{cls}$ to be the cross-entropy loss. **[Q4: Extension to other metrics]** We would like to clarify that the primary focus of our training objective is EOd, and we report results for DI as it is a widely adopted fairness metric. We include results on worst-group accuracy, which corresponds to the mini-max notion of fairness, in Tab. 3 of the attached PDF in the global rebuttal. Compared with the baseline method, our method also improves worst-group accuracy, with regard to mini-max fairness. We'll include the full results in the final paper. [1] Rezaei, Ashkan, et al. "Fairness for robust log loss classification." 
Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020. [2] An, Bang, et al. "Transferring fairness under distribution shifts via fair consistency regularization." Advances in Neural Information Processing Systems 35 (2022): 32582-32597. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response. Regarding W2, the upper bound is only as close to the true value as $\hat{EOd}$ is, so why is it a better approximation? Besides, I still believe that a deeper evaluation needs to be carried out, including the methods that address distribution shift explicitly. Therefore, I have decided to keep the rating. --- Reply to Comment 1.1.1: Title: Follow-up to Reviewer 5tmy Comment: Thank you for your response. - 1) We would like to clarify that we do not suggest $\text{EOd}^{\text{upper}}$ is a better approximation of $\text{EOd}$ than $\hat{\text{EOd}}$. Instead, our results in **[W2: Verification of theoretical bounds]** show that $\text{EOd}^{\text{upper}}$ serves as a good approximation of $\text{EOd}$. Moreover, as $\text{EOd}^{\text{upper}}$ is an upper bound on $\text{EOd}$, by minimizing $\text{EOd}^{\text{upper}}$ we are minimizing an upper bound on $\text{EOd}$, rather than the lower bound on $\text{EOd}$ given by $\hat{\text{EOd}}$. - 2) Per the reviewer's suggestion, we show a comparison with methods that address fairness under distribution shift explicitly [1,2] in Tab. 1 of the global rebuttal. The results show that our method achieves a better improvement in fairness with better or comparable accuracy. We'll include the evaluation results and discussions in the final paper. [1] Rezaei, Ashkan, et al. "Fairness for robust log loss classification." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020. [2] An, Bang, et al. "Transferring fairness under distribution shifts via fair consistency regularization." Advances in Neural Information Processing Systems 35 (2022): 32582-32597.
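As an aside on the W2 verification discussed in this thread: the Theorem 1 bound is simple enough to recompute directly. A minimal Python sketch, where the noise rates `eta` and divergence terms `eps` are illustrative placeholders rather than the values actually used in the paper's tables:

```python
import math

def eod_upper(eod_hat, eta, eps):
    """Upper bound on the clean EOd, following the formula quoted in the rebuttal:
    EOd <= EOd_hat + sum over y of (eta_{y0}+eta_{y1})/(1-eta_{y0}-eta_{y1}) * sqrt(eps_y).
    `eta` maps (y, a) pairs to noise rates; `eps` maps y to the divergence term."""
    bound = eod_hat
    for y in (0, 1):
        num = eta[(y, 0)] + eta[(y, 1)]
        den = 1.0 - eta[(y, 0)] - eta[(y, 1)]
        bound += (num / den) * math.sqrt(eps[y])
    return bound

# Illustrative numbers only (not the paper's actual noise rates):
eta = {(0, 0): 0.1, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1}
eps = {0: 0.01, 1: 0.01}
print(round(eod_upper(0.06, eta, eps), 4))  # → 0.11
```

The empirical check in the rebuttal's Table 1 amounts to evaluating this expression with the measured $\hat{\text{EOd}}$ and the paper's noise rates, and confirming that the true EOd lies below it.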
Summary: The paper aims to improve the performance of fair training when the group attributes or labels in the training data have noisy information. The paper views the noisy training data problem as a kind of distribution shift, where the training data is noisy and the test data is clean. To address this issue, the paper proposes a fair representation learning method to reduce the impact of distribution differences. The paper also provides some theoretical analyses to show the relationship between the group fairness results and noisy training data. In the experiment, the paper uses three datasets and compares with several baselines to show the performance gains of the proposed method. Strengths: S1. The paper solves an important research problem, preserving the performance of fair training under noisy training data. The paper views this problem as a distribution shift issue. S2. The paper gives some theoretical analyses of the relationship between group fairness and noisy data. S3. The proposed algorithm empirically shows better fairness and accuracy performance compared to the baselines. Weaknesses: W1. Many important details are missing in the proposed fair representation learning. - In Section 3.3, the final training objective in Eq. (6) has many unexplained important details. For example, what is L_cls, and how are the input arguments (e.g., g_00, h) used in L_cls? Also, it seems the lambda values are the tuning knobs, but there is no explanation of why the loss terms should be connected by two lambda values. Along with these details, a clearer rationale for the design choices is needed. W2. In experiments, the proposed algorithm is not clearly analyzed. For example, it would be much better if the paper explained the following: - How the lambda values in Equation 6 affect the training performance - The computational complexity of the proposed algorithm W3. 
Although this work is highly related to the studies on fairness under data distribution shifts, there are no clear comparisons with them. In experiments, all the baselines are from the noisy training literature. Since many algorithms for fair training under distribution shifts have been recently proposed, it would be better to compare with them empirically, or at least to discuss them clearly. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: All questions are included in the above weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper did not discuss the limitations and possible negative societal impacts. As for limitations, this work could discuss which types of data noise cannot be handled by the proposed algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment. For **Weakness 3: Comparison with fairness under distribution shift** and **Limitations**, please refer to the '**Comparison with fairness under distribution shift**' and '**Limitations**' parts of the global rebuttal. **[Weakness 1 (W1): Details of training objective]** We are sorry for the confusion. Under binary classification, $\mathcal{L}\_{cls}$ can be chosen as the cross-entropy loss, i.e., $\mathcal{L}\_{cls}=-\frac{1}{N}\sum_{i=1}^N [y_i\log(h(g_{y_i a_i}(x_i)))+(1-y_i)\log(1-h(g_{y_i a_i}(x_i)))]$. For classification, our framework involves two parts as defined in Lines 183-184: a bijective encoder $g$, which maps the input feature to a latent representation, and the classification head $h$, which maps the latent representation to the predicted soft label. We use two different hyperparameters for $\mathcal{L}\_{0}$ and $\mathcal{L}\_{1}$, as we formulate sensitive attribute noise to be both group- and class-dependent. This leads to different coefficients for the disparities in TPR and in TNR under clean data, compared with those under noisy data: $$ |\hat{\text{TPR}\_0} - \hat{\text{TPR}\_1}| = |(1-\eta\_{10})\text{TPR}\_0 + \eta\_{10}\text{TPR}\_1 - (1-\eta\_{11})\text{TPR}\_1 - \eta\_{11}\text{TPR}\_0| = (1-\eta\_{10}-\eta\_{11})\text{DTPR}, $$ $$ |\hat{\text{TNR}\_0} - \hat{\text{TNR}\_1}| = |(1-\eta\_{00})\text{TNR}\_0 + \eta\_{00}\text{TNR}\_1 - (1-\eta\_{01})\text{TNR}\_1 - \eta\_{01}\text{TNR}\_0| = (1-\eta\_{00}-\eta\_{01})\text{DTNR}. $$ Therefore, the hyperparameters $\lambda\_0$ and $\lambda\_1$ for fairness regularization in Eq. 6 are not necessarily identical, so as to align with the possible difference in noise rates, and connecting $\mathcal{L}\_{0}$ and $\mathcal{L}\_{1}$ with different hyperparameters gives us more flexibility in the presence of sensitive attribute noise. 
We'll include more details regarding the training objective in the final paper to provide a clearer rationale for the design choices. **[W2: Effect of $\lambda$ values]** Thanks for the suggestion. We include more results on the trade-off between fairness and accuracy as $\lambda_0$ and $\lambda_1$ vary under different noise ratios in Fig. 1 of the global rebuttal. As shown in the figure, under different noise rates, our method shows a similar fairness-utility trade-off, where fairness gradually improves as $\lambda_0$ and $\lambda_1$ increase, and the fairness improvement becomes smaller as $\lambda_0$ and $\lambda_1$ increase. **[W2: Computational complexity]** The update of our normalizing flow framework involves computing the determinant for each layer at each training iteration. Generally, the time complexity of this operation is $\mathcal{O}(n^3)$, where $n$ is the input feature dimension [1]. **[Potential societal impacts]** One possible societal impact is that, while our method deals with fairness under noise perturbation, we still need access to noisy sensitive information during training. This could raise privacy concerns, albeit with a reduced risk due to the noise perturbation. We'll add the discussion in the final paper. [1] Keller, Thomas A., et al. "Self normalizing flows." International Conference on Machine Learning. PMLR, 2021. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: I appreciate the authors' response. After reading the response, many of my concerns are resolved. I thus updated my score. There is one follow-up question regarding computational complexity. It seems the O(n^3) time complexity can be a notable bottleneck in large-scale settings. It would be helpful if the revised version of the paper could give any suggestions on handling such scenarios. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for taking the time and effort to review our work and we appreciate your recognition of our work. 
Regarding the computational complexity, where $n$ is the input feature dimension, different approaches to flow-based models have been proposed to reduce this cost [1,2,3]. We'll include more discussion of this topic in the final version. [1] Hoogeboom, Emiel, Rianne Van Den Berg, and Max Welling. "Emerging convolutions for generative normalizing flows." International conference on machine learning. PMLR, 2019. [2] Keller, Thomas A., et al. "Self normalizing flows." International Conference on Machine Learning. PMLR, 2021. [3] Caterini, Anthony L., et al. "Rectangular flows for manifold learning." Advances in Neural Information Processing Systems 34 (2021): 30228-30241.
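The TPR/TNR identities quoted in the W1 response above (the noisy-group disparity equals $(1-\eta_{y0}-\eta_{y1})$ times the clean disparity) can be checked numerically; a small sketch with made-up clean TPRs and noise rates, purely to illustrate the algebra:

```python
def noisy_tpr_disparity(tpr0, tpr1, eta_10, eta_11):
    """Noisy-group TPR disparity when group membership flips with
    class-dependent rates eta_10, eta_11 (identity from the W1 response)."""
    tpr0_hat = (1 - eta_10) * tpr0 + eta_10 * tpr1
    tpr1_hat = (1 - eta_11) * tpr1 + eta_11 * tpr0
    return abs(tpr0_hat - tpr1_hat)

# Made-up clean TPRs and noise rates, for illustration only.
tpr0, tpr1, eta_10, eta_11 = 0.9, 0.7, 0.15, 0.05
lhs = noisy_tpr_disparity(tpr0, tpr1, eta_10, eta_11)
rhs = (1 - eta_10 - eta_11) * abs(tpr0 - tpr1)  # (1 - eta_10 - eta_11) * DTPR
print(abs(lhs - rhs) < 1e-9)  # → True
```

The same expansion with $\eta_{00}, \eta_{01}$ gives the TNR identity, which is why the two fairness terms in Eq. 6 can reasonably carry separate coefficients $\lambda_0$ and $\lambda_1$.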
Summary: The paper studies the fairness problem under noise perturbation on both labels and sensitive attributes. In particular, it considers such a problem from the perspective of distribution shift and uses the normalizing flow framework to analyze the problem. Empirically, the proposed methods achieve the best utility and fairness trade-offs under different settings of noise perturbation. Strengths: 1. The paper presents a method for learning fair representations when there is noise on both sensitive attributes and labels. The method is straightforward and empirically shown to be effective. 2. The theoretical analysis is sound. 3. Compared to the previous work, this work considers both label and sensitive attribute noise without directly estimating the noise parameters, which is more practical in real-world applications. Weaknesses: I do not find any obvious weaknesses in the paper. But there are minor points with which the authors could further improve their paper. 1. The assumption of an invertible function in the fair normalizing flow methods might be strong. For example, in ResNet, the default activation function is ReLU, which is not invertible. The authors might need to provide more justification for this. 2. Discussion of limitations. The paper could be improved if there were a discussion of the limitations. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Is there a utility-fairness trade-off tuning parameter in your method? If yes, how does the utility-fairness trade-off change given different noise rates? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not discuss the limitations of the work, which is highly suggested. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment. For **Weakness 2: Discussion of limitations**, please refer to the '**Limitations**' part of the global rebuttal. **[Weakness 1 (W1): Assumption of invertible function in normalizing flow]** Invertibility is a basic assumption in normalizing flow methods [1,2,3]. We choose the normalizing flow framework as it not only provides promising performance in fairness [4], but also enables us to compute the exact likelihood in the latent space, which gives us a provable fairness guarantee in terms of the statistical divergence between latent representations of different subgroups. Moreover, while certain activation functions including ReLU are not invertible and cannot be applied to normalizing flows, recent work on normalizing flows has shown promising performance compared with state-of-the-art methods [5,6,7]. **[Question 1: Utility-fairness trade-off]** Thanks for the suggestion. We include results on the fairness-utility trade-off in Fig. 1 of the global rebuttal. Under different noise rates, our method shows a similar fairness-utility trade-off, where fairness gradually improves as $\lambda\_0$ and $\lambda\_1$ increase, and the fairness improvement becomes smaller as $\lambda\_0$ and $\lambda\_1$ increase. [1] Kobyzev, Ivan, Simon JD Prince, and Marcus A. Brubaker. "Normalizing flows: An introduction and review of current methods." IEEE transactions on pattern analysis and machine intelligence 43.11 (2020): 3964-3979. [2] Papamakarios, George, et al. "Normalizing flows for probabilistic modeling and inference." The Journal of Machine Learning Research 22.1 (2021): 2617-2680. [3] Rezende, Danilo, and Shakir Mohamed. "Variational inference with normalizing flows." International conference on machine learning. PMLR, 2015. [4] Balunović, Mislav, Anian Ruoss, and Martin Vechev. "Fair normalizing flows." arXiv preprint arXiv:2106.05937 (2021). [5] Izmailov, Pavel, et al. "Semi-supervised learning with normalizing flows." 
International Conference on Machine Learning. PMLR, 2020. [6] Mackowiak, Radek, et al. "Generative classifiers as a basis for trustworthy image classification." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [7] Wang, Tianchun, et al. "GC-Flow: A Graph-Based Flow Network for Effective Clustering." arXiv preprint arXiv:2305.17284 (2023).
Summary: This work studies noise tolerance of fairness from the perspective of subpopulation/subgroup shift, by considering the perturbation of the sensitive attributes as well as the labels _without_ the need for noise-rate estimation - by considering the noisy distribution as the source and the clean distribution as the target. This leads to a "covariate" shift between the source and the target distributions, with the shift being a consequence of the noise. The work then proposes a fair representation learning method for fairness under noisy attributes based on normalizing flows, and presents a theoretical result showing that this method minimizes the upper bound of the clean equalized odds. Thorough empirical evaluation is presented for both static and varying noise rates, showing the efficacy of the proposed method. Strengths: This work addresses an important setting of achieving fairness when both the label $y$ and sensitive attribute $a$ can be noisy - as traditional metrics can be biased under noisy data. The theoretical analysis presents a comprehensive study of fairness transfer between clean and noisy data and supports the choice of the minimizer in the proposed method. The empirical analysis is thorough - Section 4.2.2 is especially interesting as it considers both static and dynamic noise rates. Weaknesses: The following points should be considered: 1. The loss function (in Eq. 6) focuses on binary-valued $a$ and $y$. How does this methodology extend to the more general case where either or both can be multi-valued? Is it straightforward? 2. Is there any more intuition on leveraging normalizing flows for this setting? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see Weakness section. A minor suggestion: 1. Consider using a bigger text size for the plots as they are unreadable. 2. Consider using more formal English: e.g. line 67 "What's more.." Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No limitations of this method are mentioned, and it would be nice if any potential drawbacks could be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment. We'll refine the writing and adjust the text size for plots in the final paper. For **Weakness 1 (W1): Multi-valued version of loss function** and **Limitations**, please refer to the '**Non-binary setting**' and '**Limitation**' parts of the global rebuttal. **[W2: Using normalizing flow]** As discussed in Section 3.2, our main goal is to minimize the divergence between distributions of predicted soft labels in different subgroups by minimizing the divergence between distributions of data in different subgroups. Normalizing flows enable us to compute the exact likelihood in the latent space [1,2,3], which can be used to provide a provable fairness guarantee in terms of the statistical divergence between latent representations of different subgroups. [1] Dinh, Laurent, Jascha Sohl-Dickstein, and Samy Bengio. "Density estimation using Real NVP." International Conference on Learning Representations. 2016. [2] Rezende, Danilo, and Shakir Mohamed. "Variational inference with normalizing flows." International Conference on Machine Learning. PMLR, 2015. [3] Kobyzev, Ivan, Simon JD Prince, and Marcus A. Brubaker. "Normalizing flows: An introduction and review of current methods." IEEE Transactions on Pattern Analysis and Machine Intelligence 43.11 (2020): 3964-3979. --- Rebuttal Comment 1.1: Comment: Thank you Authors for clarifying, I acknowledge that I have gone through the rebuttal. I would recommend adding the analysis on multi-valued $a$ and $y$ in the revised paper. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, Thank you for taking the time and effort to review our work and we appreciate your recognition of our work. We'll include the analysis on multi-valued $a$ and $y$ in the revised paper accordingly. Best, Authors
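The exact-likelihood property of normalizing flows invoked in [W2] above comes from the change-of-variables formula, $\log p_X(x) = \log p_Z(f(x)) + \log|\det \partial f/\partial x|$. A minimal one-dimensional numpy sketch of this computation (the affine map and its parameters $a$, $b$ are hypothetical, not taken from the paper):

```python
import numpy as np

# Minimal change-of-variables sketch: an invertible affine "flow" z = a*x + b
# yields the exact density p_X(x) = p_Z(a*x + b) * |a| (scalar case).
# The map and its parameters a, b are hypothetical illustrations only.
a, b = 2.0, -1.0

def log_prob_x(x):
    z = a * x + b                                   # forward pass of the flow
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal base log-density
    return log_pz + np.log(abs(a))                  # + log |det Jacobian|

# Sanity check: the induced density integrates to (approximately) 1.
xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
mass = np.sum(np.exp(log_prob_x(xs))) * dx
print(round(mass, 4))  # ≈ 1.0
```

Deep flows stack many such invertible layers, but the log-likelihood is always this sum of a base log-density and per-layer log-Jacobian terms.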
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comment and we would like to address issues raised by several reviewers in this global rebuttal. **[Comparison with fairness under distribution shift]** We compare our method with two state-of-the-art methods [1,2] on fairness under distribution shift under the same setting as the experiments in our paper. Results are shown in Tab. 1 in the attached PDF. Compared with methods on fairness under distribution shift, our method performs better or competitively in terms of fairness and accuracy, which validates the effectiveness of our method. We'll include the results in the final paper. **[Connection between $P$ and $Q$]** Let $f$ be the classifier function (namely the composition of the bijective encoders $g\_{ya}$ and the classification head $h$ for our method, as defined in Lines 183-184); then $f$ pushes $\hat{P}\_{ya'}$ and $\hat{P}\_{ya}$ forward to $\hat{Q}\_{ya'}$ and $\hat{Q}\_{ya}$. Therefore, by the data processing inequality [3] we have $$ D\_{KL}(\hat{P}\_{ya'}||\hat{P}\_{ya}) \ge D\_{KL}(\hat{Q}\_{ya'}||\hat{Q}\_{ya}), $$ which suggests that by minimizing the divergence between $\hat{P}\_{ya'}$ and $\hat{P}\_{ya}$ we are able to minimize an upper bound on the divergence between $\hat{Q}\_{ya'}$ and $\hat{Q}\_{ya}$. Similar to Eq. 3 in our paper, we have the following relationship regarding $\hat{Q}\_{ya}$ and $Q\_{ya}$ by the formulation of noise perturbation: $$ D\_{KL}\left(\hat{Q}\_{ya}||Q\_{ya}\right)=\int \hat{Q}\_{ya} \log \frac{\hat{Q}\_{ya}}{Q\_{ya}}=-\int \hat{Q}\_{ya} \log \left[\frac{1-\eta\_{ya^{\prime}}}{1-\eta_{ya}-\eta_{ya^{\prime}}}-\frac{\eta_{ya} \frac{\hat{Q}\_{ya^{\prime}}}{\hat{Q}\_{ya}}}{1-\eta\_{ya}-\eta\_{ya^{\prime}}}\right], $$ which suggests that by minimizing the divergence between $\hat{Q}\_{ya'}$ and $\hat{Q}\_{ya}$ we also minimize the divergence between $\hat{Q}\_{ya}$ and ${Q}\_{ya}$.
Specifically, since $D_{KL} \ge 0$, when $D\_{KL}(\hat{P}\_{ya'}||\hat{P}\_{ya})=0$, we also have $D\_{KL}(\hat{Q}\_{ya}||Q\_{ya})=0$. **[Non-binary setting]** Our methodology can be readily generalized to the multi-class setting (i.e., multi-valued $Y$) and to multi-valued sensitive attributes (i.e., multi-valued $A$). Similar to the binary setting, we have different subgroups specified by the label and the sensitive information. Therefore we apply a bijective encoder $g_{ya}: \mathbb{R}^n \rightarrow \mathbb{R}^d, \mathbf{x} \mapsto \mathbf{z}$ to map a sample $\mathbf{x}$ in the corresponding subgroup $\\{Y=y,A=a\\}$ to a latent representation $\mathbf{z}$, and adjust the classification head $h: \mathbb{R}^d \rightarrow \\{0,1\\}^c, \mathbf{z} \mapsto \mathbf{y}$ accordingly to fit the class number $c$. The corresponding network contains several bijective encoders $g_{ya}$ and one universal classification head $h$. The $\mathcal{L}\_0$ and $\mathcal{L}\_1$ terms in Eq. 6 in our paper now become pairwise symmetrized divergences between subgroups, i.e., $\mathcal{L}\_y = \sum_{a} \sum_{a'\neq a} \left[D\_{KL}(P\_{z\_{ya}}||P\_{z\_{ya'}}) + D\_{KL}(P\_{z\_{ya'}}||P\_{z\_{ya}})\right]$, and $\mathcal{L}\_{cls}$ in Eq. 6 in our paper becomes the negative log-likelihood loss for multi-class classification. We include results on the COMPAS dataset under **multi-dimensional $A$** in Tab. 2 in the attached PDF to empirically verify the generalization. The sensitive attribute is chosen as $\text{race} \times \text{sex}$, i.e., the vector $[a_{\text{race}},a_{\text{sex}}]$. Under multi-dimensional $A$, our method still shows significant improvement in terms of fairness compared with the baseline, with a relatively small sacrifice in accuracy. We also include results on the CRIME dataset [4] to validate the performance of our method under **multi-class classification**. The task is to predict the number of violent crimes per $10^5$ population, and we divide the numbers into $K = 4$ classes based on equidistant quantiles.
The sensitive attribute is chosen as ethnicity. As shown in Tab. 2 in the attached PDF, our method achieves remarkable improvement in fairness with a relatively small decrease in classification accuracy under multi-class scenarios. We'll include full results in the final paper. **[Limitations]** One potential limitation is the formulation of noise perturbation, which can be instance-dependent, or the noise perturbation between different subgroups can be correlated rather than independent. Under such scenarios, we may need to adjust our framework, as fairness measures under clean and noisy data can have more complicated relationships. We'll add this discussion in the final paper. [1] Rezaei, Ashkan, et al. "Fairness for robust log loss classification." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020. [2] An, Bang, et al. "Transferring fairness under distribution shifts via fair consistency regularization." Advances in Neural Information Processing Systems 35 (2022): 32582-32597. [3] Makur, Anuran, and Lizhong Zheng. "Bounds between contraction coefficients." 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2015. [4] Redmond, Michael. Communities and Crime. UCI Machine Learning Repository. (2009). Pdf: /pdf/31b178c1daa7629660dab65e258a2fcb29a9e6c4.pdf
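The pairwise symmetrized KL regularizer $\mathcal{L}_y$ described in the 'Non-binary setting' part of the global rebuttal above can be sketched directly; a minimal numpy illustration on discrete latent histograms (the three subgroup distributions are hypothetical):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Discrete KL divergence D_KL(p || q), smoothed to avoid log(0).
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def pairwise_sym_kl(dists):
    # L_y = sum_{a} sum_{a' != a} [ D_KL(P_a || P_a') + D_KL(P_a' || P_a) ],
    # the multi-valued generalization of the L_0 / L_1 terms in Eq. 6.
    return sum(kl(p, q) + kl(q, p)
               for i, p in enumerate(dists)
               for j, q in enumerate(dists) if i != j)

# Toy latent histograms for three subgroups sharing the label y (hypothetical).
P = [np.array([0.20, 0.30, 0.50]),
     np.array([0.25, 0.35, 0.40]),
     np.array([0.50, 0.30, 0.20])]
loss = pairwise_sym_kl(P)  # > 0; reaches 0 only when all subgroups match
```

In the actual method these histograms would be replaced by the exact flow likelihoods of the latent representations, but the structure of the regularizer is the same.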
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper targets the problem of ensuring fairness with noise on either sensitive attributes or labels. Specifically, the paper models the noisy training set and the clean test set as a distribution shift and proposes a regularization term to improve the fairness of classifiers. The theoretical analysis indicates that the fairness of a classifier trained by the proposed framework on noisy data can be bounded when evaluated on clean data. Strengths: 1. The problem of ensuring fairness when training on noisy data is an important problem. 2. The whole framework makes sense. 3. The experimental results show the advantage of the proposed approach. Weaknesses: The writing is not very friendly to readers without a background in this specific fairness problem. Please check my questions below. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. One contribution claimed in this paper is that the proposed framework does not require noise rate estimation. However, if I understand correctly, $\lambda$ in the objective function (Eq. 6) is a function of $\eta$ and $\hat{\alpha}$ defined in Lemma 1. While $\eta$ indicates the noise rate in a specific subgroup, I am not convinced that not requiring noise rate estimation is a contribution. It seems like the proposed framework needs knowledge of the noise rate for each subgroup in advance. 2. As I am not an expert in the fairness issue under the noisy data setting, some equations are not very straightforward to me, such as the equation under Line 138 and Eqs. 1 and 2. It would be better to give more explanations on those equations. Especially, Eq. 2 is important for the conclusion described in Line 177. 3. The proposed framework models the noisy training data and clean testing data from the perspective of distribution shift. The high-level idea makes sense to me. However, I do not follow the description between lines 173-174.
Why does minimizing the divergence between $\hat{P}_{ya}$ and $\hat{P}_{ya'}$ lead to the minimization of the divergence between $Q_{ya}$ and $\hat{Q}_{ya}$? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please check my questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment. For **Question 3: Connection between KL-divergence**, please refer to the '**Connection between $P$ and $Q$**' part of the global rebuttal. **[Question 1 (Q1): Requirement of noise rate estimation]** We are sorry about the confusion. $\lambda_0$ and $\lambda_1$ in the objective function (Eq. 6) are hyperparameters that control the trade-off between classification loss and fairness regularization, and are **different** from the $\lambda_a$ defined in Lemma 1. Tuning the hyperparameters $\lambda_0$ and $\lambda_1$ in the objective function (Eq. 6) does not require noise rate estimation. We'll revise the notation in the final paper to avoid this reuse of symbols. **[Q2: Explanation of Line 138, Eqs. 1 and 2]** We are sorry for the confusion and we'll include more explanation regarding the formulation in the final paper. For Eq. 1, our formulation of $\eta_{ya}:= p\left[A \neq \hat{A}|Y=y,\hat{A}=a\right]$ assumes random flips within different sensitive subgroups, and that flips between different subgroups are independent. Therefore, we can decompose the distribution of data $\hat{P}\_{ya}$ under the noisy sensitive group $\\{Y=y,\hat{A}=a\\}$ as a mixture of the distributions under the clean sensitive groups $P_{ya}$ and $P\_{ya'}$: $$ \hat{P}\_{ya} = \eta_{ya} P_{ya'} + (1-\eta_{ya}) P_{ya}, $$ where $\eta_{ya} P_{ya'}$ corresponds to samples from the clean subgroup $\\{Y=y,A=a'\\}$ whose sensitive information is flipped, and $(1-\eta_{ya}) P_{ya}$ corresponds to samples from the clean subgroup $\\{Y=y,A=a\\}$ whose sensitive information remains unchanged (i.e., not affected by noise). Therefore we have the following system of equations regarding $P_{ya}$, $P_{ya'}$, $\hat{P}\_{ya}$ and $\hat{P}\_{ya'}$: \begin{align*} \hat{P}\_{ya} &= \eta_{ya} P_{ya'} + (1-\eta_{ya}) P_{ya}, \\\\ \hat{P}\_{ya'} &= \eta_{ya'} P_{ya} + (1-\eta_{ya'}) P_{ya'}.
\end{align*} By solving the system of equations above we are able to obtain the expression of $P\_{ya}$ in terms of $\hat{P}\_{ya}$ and $\hat{P}\_{ya'}$ as in Eq. 2. Line 138 follows a similar formulation, except that the noise rate $\eta_a := p\left[A \neq \hat{A}|\hat{A}=a\right]$ is now identical within each sensitive group, and we have $\hat{P}\_{ya} = \eta_a P\_{ya'} + (1-\eta_a) P\_{ya}$ and $\hat{P}\_{a} = \eta\_a P\_{a'} + (1-\eta\_a) P\_{a}$ under such formulation. The fairness measures under $\eta_a$ follow by substituting $\hat{P}\_{a}$ and $\hat{P}\_{ya}$ with the corresponding clean distributions: \begin{align*} \hat{\text{DI}} &= \int\_{0.5}^{1}|\hat{P}\_{0}-\hat{P}\_{1}| = \int\_{0.5}^{1}|(\eta\_0 P\_{1} + (1-\eta\_0) P\_{0}) - (\eta\_1 P\_{0} + (1-\eta\_1) P\_{1})| \\\\ &= (1-\eta\_0-\eta\_1) \int\_{0.5}^{1}|{P}\_{0}-{P}\_{1}| = (1-\eta\_0-\eta\_1)\text{DI}, \end{align*} \begin{align*} \hat{\text{EOd}} = & \int\_{0.5}^{1}|\hat{P}\_{10}-\hat{P}\_{11}| + \int_{0.5}^{1}|\hat{P}\_{00}-\hat{P}\_{01}| \\\\ = &\int\_{0.5}^{1}|(\eta\_0 P\_{11} + (1-\eta\_0) P\_{10}) - (\eta\_1 P\_{10}+ (1-\eta\_1) P\_{11})| \\\\ &+ \int\_{0.5}^{1}|(\eta\_0 P\_{01} + (1-\eta\_0) P\_{00}) - (\eta\_1 P\_{00} + (1-\eta\_1) P\_{01})|= (1-\eta\_0-\eta\_1)\text{EOd}. \end{align*} --- Rebuttal Comment 1.1: Comment: Thanks, authors, for the clarification. I suggest authors include more details in the next version. Meanwhile, as $a$ could be either 0 or 1, it would be better to use other notations to represent the hyper-parameters in Eq. 6. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer, Thank you for taking the time and effort to review our work. We sincerely appreciate your constructive feedback and we will revise our final paper accordingly. Best, Authors
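The $(1-\eta_0-\eta_1)$ scaling of DI derived above can be checked numerically, since the mixture relation makes $\hat{P}_0 - \hat{P}_1 = (1-\eta_0-\eta_1)(P_0 - P_1)$ pointwise; a minimal sketch with toy score densities (all distributions and noise rates hypothetical):

```python
import numpy as np

# Grid over the predicted-score interval [0, 1].
x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]

# Toy clean score densities for the two sensitive groups (hypothetical).
P0 = np.exp(-(x - 0.35)**2 / 0.02); P0 /= P0.sum() * dx
P1 = np.exp(-(x - 0.65)**2 / 0.02); P1 /= P1.sum() * dx

eta0, eta1 = 0.2, 0.1  # group-wise flip rates (hypothetical)

# Noisy group densities as mixtures of the clean ones.
P0_hat = eta0 * P1 + (1 - eta0) * P0
P1_hat = eta1 * P0 + (1 - eta1) * P1

mask = x >= 0.5  # DI integrates |P_0 - P_1| over [0.5, 1]
DI = np.abs(P0 - P1)[mask].sum() * dx
DI_hat = np.abs(P0_hat - P1_hat)[mask].sum() * dx

# The derivation predicts DI_hat = (1 - eta0 - eta1) * DI exactly.
print(np.isclose(DI_hat, (1 - eta0 - eta1) * DI))  # True
```

The same pointwise cancellation gives the $(1-\eta_0-\eta_1)$ factor for EOd, since each of its two integrals has the identical mixture structure.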
null
null
null
null
null
null
Exploiting Correlated Auxiliary Feedback in Parameterized Bandits
Accept (poster)
Summary: In this paper, the authors studied a variant of the parameterized bandits problem where the learner has access to auxiliary feedback that is correlated with the observed reward. The authors proposed a method that leverages the auxiliary feedback to construct a reward estimator with more accurate confidence bounds, resulting in better regret bounds. The paper provides a characterization of the regret reduction in terms of the correlation coefficient between the reward and auxiliary feedback. Finally, they demonstrated the effectiveness of their method via numerical experiments. Strengths: 1. The setting studied in this paper is interesting and realistically challenging, as it is often unclear how to use such auxiliary feedback for better online decision-making. 2. The paper is well organized. The theoretical results also appear sound. 3. The authors have provided a comprehensive literature review on existing research related to this work, and clearly explained the connection between their work and prior works in control variate theory. Weaknesses: It would be good if the authors could provide further discussions and clarifications on the following points: 1. The description of the algorithm (OFUL-AF) could be made clearer. Currently, it appears as a rephrasing of each step without clear connections to prior derivations. It would be helpful to explain how each step relates to the preceding derivations. 2. The procedure of selection of the number of auxiliary feedback $q$ in practice is not entirely clear to me. It'd be helpful to have more details regarding this point in addition to Remark 1. 3. The regret bound in Theorem 2 contains hidden constant terms. Could you elaborate more on the magnitude of these constant terms and their impact on the overall result? 4. 
The authors acknowledge that the actual form of the auxiliary feedback functions is typically unknown in practice, which I consider to be an important realistic challenge that needs to be dealt with. While Section 4 explores the effect of using estimated functions, I wonder how one can obtain an unbiased estimator for the auxiliary feedback functions in the first place. This is a challenging aspect that merits further explanation. 5. How should I comprehend the terms $a(e)$ and $\rho_e$ from Theorem 3? What do they each represent and do they constitute a certain kind of tradeoff within the regret bound? 6. It'd be helpful to also see how the proposed model and method can be applied to certain real-world data. This is related to my point (4) above as I'm not entirely sure how one would make sense of the auxiliary feedback function in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and suggestions. We have responded to each of the questions below. >### *1. The description of the algorithm (OFUL-AF) could be made clearer. Currently, it appears as a rephrasing of each step without clear connections to prior derivations. It would be helpful to explain how each step relates to the preceding derivations.* To improve the readability, we will add more explanations and connections to prior results while explaining each step of OFUL-AF in the revised version of the paper. >### *2. The procedure of selection of the number of auxiliary feedback $q$ in practice is not entirely clear to me. It'd be helpful to have more details regarding this point in addition to Remark 1.* The variance reduction given in Theorem 1 depends on the auxiliary feedback via two terms: $\frac{t-2}{t-q-2}$ and $\rho^2$ (defined in Line 142). As we increase the number of auxiliary feedback variables, $\rho^2$ will increase, leading to more variance reduction. However, simultaneously, the term $\frac{t-2}{t-q-2}$ will also increase, negating the gain in variance reduction. Therefore, it is recommended to use a small number of auxiliary feedback variables to avoid degrading the variance reduction. One possible procedure for selecting the number of auxiliary feedback variables is given in Lavenberg et al., 1982 (specifically on page 196). >### *3. The regret bound in Theorem 2 contains hidden constant terms. Could you elaborate more on the magnitude of these constant terms and their impact on the overall result?* There are two constants: one is the multiplicative constant $C_t$ (defined in Line 528 in the Appendix), whose value goes to $1$ as $t$ increases. The other is an additive constant (more details after Line 530 in the Appendix), which is comparatively negligible. >### *4.
The authors acknowledge that the actual form of the auxiliary feedback functions is typically unknown in practice, which I consider to be an important realistic challenge that needs to be dealt with. While Section 4 explores the effect of using estimated functions, I wonder how one can obtain an unbiased estimator for the auxiliary feedback functions in the first place. This is a challenging aspect that merits further explanation.* Depending on the application, a learner may have already collected auxiliary feedback (historical data), may collect auxiliary feedback independently of the reward sample (e.g., via access to cheap low-fidelity simulations), or may even collect additional samples of auxiliary feedback without a reward sample; e.g., an online food platform will record the food delivery time (auxiliary feedback) for every order but may not get a user rating (reward) for each order. In practice, one can use an appropriate variant of our proposed method depending on the needs of their application (EH and BE variants shown in Fig. 2(a)-2(c)). However, it will be challenging to quantify the gain in variance reduction (as in Theorem 3) for these variants. >### *5. How should I comprehend the terms $a(e)$ and $\rho_e$ from Theorem 3? What do they each represent and do they constitute certain kind of tradeoff within the regret bound?* The term $a(e)$ arises from the different sampling methods used by the IS and MF sampling strategies for estimating the unknown auxiliary functions. As given in Theorem 3, the value of $a(IS)$ (for the IS sampling strategy) is 1, whereas it is $\frac{r-1}{r}$ for the MF sampling strategy with $r_i = r$ for each auxiliary variable. The term $\rho_e$ denotes the multiple correlation coefficient of the reward and its auxiliary feedback when using the IS and MF sampling strategies. Its value depends on the sampling strategy (Line 231 for the definition and the Eqs. before Line 239).
From the expression given in Line 282, a smaller $a(e)$ and a larger $\rho_e$ lead to smaller regret. >### *6. It'd be helpful to also see how the proposed model and method can be applied to certain real-world data. This is related to my point (4) above as I'm not entirely sure how one'd make sense of the auxiliary feedback function in practice.* Since it is common in the bandit literature to measure the performance of bandit algorithms on synthetically generated data, we have also validated our method using different bandit instances. In the future, we will set up an elaborate experiment to demonstrate the effectiveness of our algorithms on real datasets. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My main remaining concern is about the assumed knowledge of the auxiliary feedback functions, or its unbiased estimator. Could you elaborate on how you would actually construct an unbiased estimator by using historical data or acquiring more samples of auxiliary feedback? Given the noisy environment one would usually face in practice, it'd be natural to expect the existence of bias. However, it seems that if the bias gets large the results can worsen quite a bit. Is there a way to theoretically quantify how much your regret might worsen given the extent of bias? Given my concern above, I also think experiments on real-world data would be a necessity here. I understand that it is common in the bandits literature that synthetic experiments are adopted; nevertheless, given the authors' claim that auxiliary feedback is closely connected with real-life applications, while assumptions in this paper do not always apply to real-world settings (as discussed by reviewers above), it's important to understand what kind of modifications one might need to make to the proposed method for real-world scenarios. Due to the above reasons, I will keep my score as borderline.
--- Reply to Comment 1.1.1: Title: Auxiliary feedback and experiments Comment: Thank you for acknowledging our rebuttal. Here are our responses to your questions. > ### *My main remaining concern is about the assumed knowledge of the auxiliary feedback functions, or its unbiased estimator. Could you elaborate on how you would actually construct an unbiased estimator by using historical data or acquiring more samples of auxiliary feedback?* There are many applications (as mentioned in Lines 114-115) where auxiliary feedback can be constructed such that the corresponding auxiliary functions are known. However, when these auxiliary functions are unknown, we must estimate them to exploit the correlation between the reward and its auxiliary feedback and obtain a better reward function estimator (i.e., an estimator with tight confidence bounds). To construct an unbiased estimator of an auxiliary function, let us assume a linear relationship exists between the features of an action (or context-action) and the corresponding auxiliary feedback, i.e., the auxiliary function is linear. When we have extra samples of auxiliary feedback (historical data, or samples acquired separately without reward samples), the ordinary least squares solution (when the number of samples exceeds the number of features) gives an unbiased estimator, i.e., $\mathbb{E}[\hat{g}_m] = g$, where $\hat{g}_m$ is the ordinary least squares estimator of the auxiliary function $g$ using $m$ samples. However, obtaining an unbiased auxiliary function estimator for an arbitrary non-linear auxiliary function may be difficult. > ### *Given the noisy environment one would usually face in practice, it'd be natural to expect the existence of bias. However, it seems that if the bias gets large the results can worsen quite a bit. Is there a way to theoretically quantify how much your regret might worsen given the extent of bias?* We agree with the reviewer's observation that the performance of our method will decline (i.e., regret increases, as shown in Fig.
2(d)) as bias in estimated auxiliary feedback increases. We have not theoretically quantified the relationship between regret and bias in the estimated auxiliary feedback in this paper. To do so, one has to construct new confidence intervals for the reward function estimator with biased hybrid rewards (as biased auxiliary feedback will affect the hybrid rewards as defined in Eq. (3)). > ### *Given my concern above, I also think experiments on real-world data would be a necessity here. I understand that it is common in bandits literature that synthetic experiments are adopted; nevertheless, given the authors' claim that auxiliary feedbacks are closely connected with real-life applications, while assumptions in this paper do not always apply to real-world settings (as discussed by reviewers above), it's important to understand what kind of modifications one might need to make to the proposed method for real-world scenarios.* Our work is mainly theoretical, and it is the first to demonstrate how one can use correlated auxiliary feedback (whenever available) to improve the performance (i.e., minimize the regret) of parameterized bandit algorithms. To establish the performance gains analytically, we have assumed (apart from common assumptions used in parameterized bandit papers) that the estimators of the auxiliary functions are unbiased. We agree that our assumptions may not hold in every real-world setting, but this is also true for many bandit algorithms. Our experimental goal is to verify our theoretical results (using correlated auxiliary feedback leads to smaller regret than existing parameterized bandit algorithms) and different properties of our proposed method (e.g., how regret varies with the correlation between the reward and its auxiliary feedback, and with the number of auxiliary feedback variables [in rebuttal]). To illustrate this, we used synthetic problem instances as it is easier to verify our theoretical results with them.
We agree with the reviewer that our method may not be directly used in practice, and one has to make appropriate changes to adapt our method to their problems. *We hope that our answers will improve your opinion of our work. If you have additional questions, we would be happy to answer them.*
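The variance-reduction mechanism discussed throughout this thread, correcting the sample mean of the reward with a control variate built from auxiliary feedback whose mean is known, can be sketched as follows (a minimal illustration with a hypothetical data-generating process, not the paper's OFUL-AF algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 2000

plain, cv = [], []
for _ in range(trials):
    # Auxiliary feedback w with KNOWN mean 0; reward r correlated with w.
    # The coefficients (true mean 1.0, slope 0.9, noise 0.3) are hypothetical.
    w = rng.normal(0.0, 1.0, n)
    r = 1.0 + 0.9 * w + rng.normal(0.0, 0.3, n)
    beta = np.cov(r, w)[0, 1] / np.var(w, ddof=1)  # estimated CV coefficient
    plain.append(r.mean())                   # vanilla mean-reward estimate
    cv.append(r.mean() - beta * w.mean())    # control-variate corrected estimate

# Both estimators are (nearly) unbiased, but the control-variate one has much
# smaller variance, shrinking roughly by the (1 - rho^2) factor, up to the
# (t-2)/(t-q-2) inflation from estimating beta, as discussed in this thread.
print(np.var(cv) < np.var(plain))  # True
```

With these hypothetical parameters $\rho^2 = 0.81/0.9 = 0.9$, so the corrected estimator's variance is roughly a tenth of the vanilla one, mirroring why tighter confidence widths, and hence smaller regret, follow from exploiting correlated auxiliary feedback.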
Summary: This paper leverages the method of control variates to obtain reward estimates with smaller variance for contextual bandit algorithms, since smaller variance in reward estimation means tighter confidence bound estimation and therefore smaller regret. Estimation methods for both unknown and known auxiliary feedback functions are provided, with both theoretical and empirical results demonstrating the effectiveness of the proposed solution. Strengths: Leveraging available auxiliary feedback to improve reward estimation is an important and meaningful approach for improving bandit algorithms. This paper provides a viable solution to realize that goal. The provided solution is an extension of Verma and Hanawal’s NeurIPS 2021 work in multi-armed bandits, and the authors further extended it to contextual bandit problems, especially with non-linear reward functions. Weaknesses: Using control variates to reduce the reward estimation variance is not a completely new idea, as the authors pointed out in the related work discussions. And the estimation techniques employed in this paper were also borrowed from prior works. For example, the results under a known auxiliary feedback function are a straightforward extension of Verma and Hanawal’s NeurIPS 2021 work on top of OFUL’s analysis, and the results for the unknown auxiliary feedback function are also not super challenging (e.g., using existing estimation techniques and assuming the reward variance is known). This to a certain degree limits the novelty of this work. The description of the unbiased linear estimator $\beta_e$ is unfortunately unclear and could be actually problematic. Based on the problem setup, my understanding is that every arm pull reveals all $q$ dimensions of the auxiliary feedback, in addition to the reward feedback.
And the different sampling/partitioning methods, e.g., IS or MF, decide how to allocate those samples for estimating the reward function and the auxiliary feedback function. Hence, the total number of samples for estimation at time $t$ is $t$, but each sample has $q+1$ dimensions. But in line 302, it states that to maintain $r=2$, it requires “getting one extra sample of auxiliary feedback in each round”. This seems to suggest the algorithm can pull another arm and only require the auxiliary feedback for free. If this is the case for algorithm design, it is an unfair advantage to the algorithm, compared to the baseline bandit algorithms, as it can collect more information about the reward. Or in other words, why not further increase the ratio $r$ to better exploit this advantage? The EH variant of this algorithm further confirmed my understanding: it assumes that more observations of the auxiliary feedback are available. The experiment settings were also overly simplified, which does not strongly support the advantages or demonstrate the limitations of the proposed solution. For example, there is only one dimension of auxiliary feedback. Given that the algorithm’s theoretical performance depends on the dimension of auxiliary feedback, it is important to vary its dimension to investigate its practical impact. Another factor that should be mentioned is that the paper only addresses the finite-arm setting (especially in the experiments), though in contextual bandit problems infinite arms with potentially adversarial context arrival is believed to be a more general setting. Otherwise, simple algorithms can already achieve satisfactory regret: for example, a greedy algorithm can obtain sublinear regret with sufficient context diversity, and logarithmic regret is also achievable under stochastic context distributions. It would be meaningful to discuss how the developed algorithm can be extended to this more general and also more challenging environment.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Since we are using linear regression for $\beta$ estimation, why is $W^\top_t W_t$ guaranteed to be invertible? Even when we have more samples than the number of auxiliary dimensions, i.e., $t>q+2$, the observations of auxiliary feedback might not span the entire $q$-dimensional space, depending on the distribution of selected arms. Is it true that the algorithm is supposed to have free access to auxiliary feedback from any arm? If so, what prevents the algorithm from extensively pulling all the arms to estimate their auxiliary feedback function, so as to get the most accurate estimation of those functions first? The algorithm assumes the reward noise is known and uses this quantity to control when to use which estimator. But in practice, we do not know the actual value of the reward variance, so how can we use the proposed algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I do not find any concerns regarding the negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and constructive feedback. We have responded to each of the questions below. > ### *The means of using control variates to reduce the reward estimation variance is not a completely new idea, as the authors pointed out in the related work discussions. And the estimation techniques employed in this paper were also borrowed from prior works. $\ldots$ This to a certain degree limits the novelty of this work.* Our work is motivated by Verma and Hanawal (2021). Please check the **Main contributions** part of the global rebuttal for our novel contributions. > ### *$\ldots$ in line 302, it states to maintain $r=2$, it needs to “getting one extra sample of auxiliary feedback in each round”. This seems to suggest the algorithm can pull another arm and only require the auxiliary feedback for free. If this is the case for algorithm design, it is an unfair advantage to the algorithm, compared to the baseline bandit algorithms, as it can collect more information about the reward. Or in other words, why not further increase the ratio $r$ to better exploit this advantage? The EH variant of this algorithm further confirmed my understanding: it is assumed more observations about the auxiliary feedback are available.* Depending on the application, a learner may have already collected auxiliary feedback (historical data), may collect auxiliary feedback independent of the reward sample (e.g., cheap low-fidelity simulations), or may even collect additional samples of auxiliary feedback without a reward sample; e.g., an online food platform will record the food delivery time (auxiliary feedback) for every order but may not get a user rating (reward) for each order. In our experiments, we maintain $r=2$ to validate our theoretical result, but getting an extra auxiliary sample does not imply that we will play that arm again but ignore the reward sample. 
Our proposed method uses all observations with reward samples to estimate the reward function $f$. In practice, one can use an appropriate variant of our proposed method depending on the needs of their application (EH and BE variants shown in Fig. 2(a)-2(c)). However, it will be challenging to quantify the exact gain in variance reduction for these variants. > ### *The experiment settings were also overly simplified, $\ldots$. Given the algorithm’s theoretical performance depends on the dimension of auxiliary feedback, it is important to vary its dimension to investigate its practical impact.* Our work is a theoretical work that quantifies the performance gain achieved by a parameterized bandit algorithm using auxiliary feedback correlated with reward samples. It is common in the bandit literature to measure the performance of bandit algorithms on synthetically generated data. Therefore, we have also validated our method using different bandit instances. We have also added an experiment result with larger $q \in \{1, 2, 3, 4, 5\}$ when the auxiliary functions are known. For more details, please check the attached pdf in the global rebuttal. > ### *Another factor should be mentioned is that the paper only addresses the finite arm setting (especially in the experiments), though in contextual bandit problems infinite arms with potentially adversarial context arrival is believed to be a more general setting. $\ldots$.* Our experiment results using OFUL as a baseline linear bandit algorithm deal with the infinite-arm setting (different results are shown in Fig. 2(a), Fig. 3(a), Fig. 3(b), Fig. 3(c), and Fig. 4(a)). The main challenge in extending our method to more general settings and challenging environments is incorporating auxiliary feedback correlated with reward samples to improve reward function estimation. Our proposed method and techniques will be a baseline for future work in more challenging settings. 
> ### *Since we are using linear regression for $\beta$ estimation, why is $W_tW_t^\top$ guaranteed to be invertible?$\ldots$.* The randomness in each auxiliary feedback is due to IID Gaussian noise, which is independent of the actions and of other auxiliary feedback. It makes the auxiliary feedback vectors independent of each other, and hence $W_tW_t^\top$ is invertible if there are more than $q$ auxiliary feedback vectors. When auxiliary feedback vectors are not independent, we can add a condition (e.g., Line 7 of OFUL-AF) to calculate $\boldsymbol{\hat\beta}_t$ only if $W_tW_t^\top$ is an invertible matrix. > ### *Is it true that the algorithm is supposed to have free access to auxiliary feedback from any arm? If so, what prevents the algorithm from extensively pulling all the arms to estimate their auxiliary feedback function, so as to get the most accurate estimation of those functions first?* We have kept $r$ constant in Theorem 3 to quantify the variance reduction. However, this may not be the case in practice; e.g., sufficient historical data of auxiliary feedback can be used to get a reasonable estimate of an auxiliary function. Consider another example of a cheap low-fidelity simulator, where an algorithm can get as many auxiliary feedback samples as needed to obtain a good estimate of the auxiliary function at the start. However, using this may lead to a variant of our method for which quantifying the variance reduction may be challenging. > ### *$\ldots$ in practice, we do not know the actual value of reward variance, and how could we use the proposed algorithm?* We agree with the reviewer that we do not know the actual value of the reward variance in practice. However, the assumption of a known (upper bound on the) reward variance is common in many bandit algorithms, e.g., OFUL, Lin-UCB, UCB-GLM, and IGP-UCB. Further, our method can be extended to bandit algorithms like VOFUL and VOFUL2 for problems with unknown variance. 
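The invertibility argument above can be checked with a small numpy sketch. The dimensions are hypothetical, and the ridge fallback is our illustration of an extra safeguard, not the paper's exact Line 7 condition:

```python
import numpy as np

rng = np.random.default_rng(0)
q, t = 5, 8  # q auxiliary dimensions, t > q observed feedback vectors (hypothetical)

# Rows of W: auxiliary-feedback vectors whose randomness is IID Gaussian noise,
# so they are linearly independent almost surely once t > q.
W = rng.normal(size=(t, q))
G = W.T @ W  # Gram matrix used in the least-squares estimate of beta

assert np.linalg.matrix_rank(G) == q  # full rank, hence invertible
y = rng.normal(size=t)
beta_hat = np.linalg.solve(G, W.T @ y)

# If feedback vectors may be dependent, a small ridge term keeps the solve well-posed.
beta_ridge = np.linalg.solve(G + 1e-6 * np.eye(q), W.T @ y)
```

With IID Gaussian rows the Gram matrix is invertible with probability one, which is the point of the reply; the ridge variant only matters when the independence assumption fails.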
--- Rebuttal Comment 1.1: Comment: I appreciate the authors' explanations in the rebuttal, which help me better understand the technical details. Still, a few points to follow up on. > “In our experiments, we maintain $r=2$ to validate our theoretical result, but getting an extra auxiliary sample does not imply that we will play that arm again but ignore the reward sample. Our proposed method uses all observations with reward samples to estimate the reward function $f$.” I am a bit confused by this explanation: given $r$ is the ratio between the samples used in estimating $g$ vs. $f$, my understanding of $r=2$ is that at each time we will have one observation of the reward and two observations of auxiliary feedback. But how could we get that extra sample of auxiliary feedback without pulling one more arm? Correct me if I misunderstood. > “We have also added an experiment result with larger $q$ ($q=1,\ldots,5$) when the auxiliary functions are known.” Very glad to find this new result to verify the effectiveness of having more auxiliary feedback, though the results were obtained under the simplest setting of known auxiliary functions. In addition, the authors mentioned several times leveraging historical data with auxiliary feedback and also tested its EH variant in the experiments; but in this case, we should compare with bandit algorithms that leverage offline data (the reward part), such as the following - Zhang, Chicheng, et al. "Warm-starting contextual bandits: Robustly combining supervised and bandit feedback." arXiv preprint arXiv:1901.00301 (2019). 
> “the assumption of a known (upper bound of) reward variance is common in many bandit algorithms” I totally agree that this is a common assumption in most bandit algorithms, but my original intent was to ask what its practical impact is: in UCB-type algorithms, the assumed reward noise scales an algorithm’s regret; while in the proposed algorithm, it not only scales regret but also affects when the benefit of auxiliary feedback appears. Not sure if this complicates hyper-parameter tuning. --- Reply to Comment 1.1.1: Title: Auxiliary feedback and additional experiments with multiple unknown auxiliary functions Comment: Thank you for acknowledging our rebuttal. Here are our responses to your questions. > ### *... my understanding of $r=2$ is that at each time we will have one observation of reward, and two observations of auxiliary feedback. But how could we get that extra sample for auxiliary feedback without pulling one more arm? ...* We agree with the reviewer that we can get the extra sample of auxiliary feedback only after pulling an arm. To clarify, here is how we do it in our experiments. As the auxiliary function $g$ is known in our experiments (Lines 624-627 in the Appendix, where $g$ is parameterized by $\theta_w^\star$), we can generate samples of auxiliary feedback (without reward) for randomly selected actions. Therefore, we have two types of observations -- one has a reward and associated auxiliary feedback for the selected action, while the other only has auxiliary feedback for the random action. It leads to the question, *Is getting additional auxiliary samples without a reward sample even possible?* The answer is *Yes*. There are many real-life applications where auxiliary feedback is observed but not the reward for selected actions. For example, the food delivery platform may not get a user rating for each order but can record the delivery time for every order. 
We have discussed such scenarios in the global rebuttal under **Availability of Auxiliary feedback**. Note that one can get extra samples of auxiliary feedback, but each sample may have an associated cost. For example, one can get multiple samples from a low-fidelity simulation model. However, each sample will have a computational cost (which may be very small compared to a high-fidelity simulation model). > ### *... new result to verify the effectiveness of having more auxiliary feedback, though the results were obtained under the simplest setting of known auxiliary functions.* We have also run additional experiments with multiple unknown auxiliary functions. We use a problem instance with $5$ auxiliary functions having different noise standard deviations (i.e., $\sigma = \{0.1, 0.08, 0.07, 0.06, 0.02\}$) and one unknown function with noise standard deviation of $0.1$. We chose the auxiliary function with the largest noise standard deviation for the $q=1$ case, the two auxiliary functions with the largest noise standard deviations for the $q=2$ case, and so on. We set the number of rounds $(T)$ to $1000$ and $r=2$ for the IS and MF sampling-based algorithms. We repeated all our experiments $100$ times ($50$ times for Lin-UCB-IS and Lin-UCB-MF) and show the average cumulative regret as defined in Eq. (1) with a 95% confidence interval in the following table.

|Algorithm \ No. of AF|$q=1$|$q=2$|$q=3$|$q=4$|$q=5$|
|-|-|-|-|-|-|
|Lin-UCB-EH ($n_h=10$)|39.545$\pm$0.802|30.282$\pm$1.051|4.346$\pm$0.267|4.306$\pm$0.164|4.465$\pm$0.182|
|Lin-UCB-BE ($\epsilon_g=0.1$)|32.259$\pm$0.871|21.612$\pm$1.048|9.414$\pm$0.83|11.984$\pm$1.401|18.586$\pm$2.022|
|Lin-UCB-IS|44.816$\pm$0.729|43.693$\pm$0.805|94.558$\pm$1.208|159.855$\pm$2.282|185.856$\pm$2.485|
|Lin-UCB-MF|44.816$\pm$0.729|43.615$\pm$0.747|93.961$\pm$1.305|161.287$\pm$1.816|190.807$\pm$1.756|

As expected, regret initially decreases as $q$ increases, but then it increases and even becomes worse than the baseline for Lin-UCB-IS and Lin-UCB-MF for $q>2$. For reference, the average cumulative regret incurred by Lin-UCB on the same problem instance was **50.75 $\pm$ 0.435**. > ### *In addition, the authors mentioned several times of leveraging historical data with auxiliary feedback and also tested its EH variant in the experiments; but in this case, we should compare with bandit algorithms that leverage offline data (the reward part)...* When the auxiliary functions are unknown, we need a good estimate of these functions to get the maximum benefit from auxiliary feedback. One possible way to get a good estimate of an auxiliary function is to use historical data of auxiliary feedback for its estimation. We have not considered problems where historical reward data is also available. It is an interesting direction to pursue in the future, and one can start with the techniques introduced in the suggested paper (Zhang et al., 2019). > ### *... in the proposed algorithm, it not only scales regret but also affects when the benefit of auxiliary feedback appears. Not sure if this complicates hyper-parameter tuning.* As we use a high-probability upper bound on the noise variance of hybrid rewards, this upper bound may exceed the noise variance $(\sigma^2)$ of rewards. 
To ensure the proposed algorithm performs better than the baseline bandit algorithm, we only use hybrid reward when its estimated upper bound of noise variance is smaller than $\sigma^2$. Therefore, hyper-parameter tuning needs to be adjusted when the algorithm switches from using rewards to hybrid rewards for estimating the reward function. *We hope our answers will further improve your opinion of our work. If you have additional questions, we would be happy to address them.*
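The switching rule described in the reply above (use hybrid rewards only while their estimated variance bound beats $\sigma^2$) can be sketched as follows. The helper name and the scalar bound are hypothetical, not the paper's exact rule:

```python
import numpy as np

def pick_estimate(y_samples, z_samples, var_bound_hybrid, sigma2):
    """Illustrative switching rule (hypothetical): fall back to raw rewards
    unless the high-probability variance bound of the hybrid reward is
    smaller than the known reward noise variance sigma^2."""
    if var_bound_hybrid < sigma2:
        return np.mean(z_samples)  # hybrid rewards: lower-variance estimate
    return np.mean(y_samples)      # raw rewards: safe default

# Toy usage: the hybrid bound 0.6 beats sigma^2 = 1.0, so hybrid rewards are used.
est = pick_estimate([1.9, 2.1], [2.0, 2.0], var_bound_hybrid=0.6, sigma2=1.0)
```

This guard is what guarantees the method never does worse than the baseline estimator, at the cost of making hyper-parameter behavior depend on when the switch happens.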
Summary: This paper studies the parameterized bandit problem in which the learner observes auxiliary feedback, correlated with the reward, together with the reward itself. It is motivated by the control variate approach in causal inference; the main difference is that this paper extends control variate theory to a setting where the "control variable" is parameterized by a function. It proposes a new bandit algorithm that replaces the original observed reward with a version built upon the known/estimated control variate, and studies the expected instantaneous regret compared with the original bandit algorithm. Strengths: - This paper studies a new bandit framework that differs from most prior work (such as side information, side observations, etc.). In this new framework, the learner also observes auxiliary feedback beyond the original reward. This setting is relevant to many real-world scenarios: in a food delivery platform, a user rating might be revealed together with the delivery time; in a recommendation platform, the user like rate being optimized might be revealed together with the watch time; etc. - The paper is very well-written and easy to follow. It builds upon classical control variate theory and extends it to the known-function (of the control variate) setting, then further to the estimated-function setting. The method is solid, and the expected instantaneous regret is provided to validate the soundness of the proposed approach. - I appreciate the additional efforts on synthetic datasets to verify the empirical performance of the proposed method under various environments, such as linear, linear contextual, and non-linear contextual bandits, and to study how the estimation of the control variate function as well as the correlation of the auxiliary feedback with the reward affect the final performance. These results facilitate understanding. 
Weaknesses: - I am a little bit confused about the relationship between the variance reduction and the number of auxiliary feedback signals being used. Under estimated $\beta$ and from Theorem 1, it seems the optimal number of auxiliary feedback signals is 1, which seems a little bit counter-intuitive; could the authors comment more on this aspect? - In Section 4, under estimated auxiliary functions, the sampling strategy for estimating the auxiliary function seems very computationally inefficient, and this leaves a much smaller sample size for estimating the original $f$ function compared with classical bandit algorithms, especially when the number of auxiliary functions is large. Remark 2 also does not make much sense when $r_i$ goes to infinity, where most of the samples are used for the auxiliary function estimate. - The IS and MF sampling strategies are only listed in the method discussions, and for the experiments when $q=2$, it is hard to compare their pros and cons as they are effectively the same. Could the authors add more results for larger $q$ to showcase the effectiveness of the two sampling methods? A larger $q$ might also be helpful in understanding the effectiveness of having more control variates. - I am not sure if there is any public dataset available to showcase the effectiveness of the method in more real-world scenarios. The current ablation experiments facilitate the understanding of the method, but adding real-world experiments would be much more convincing. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - For Figure 2 (f), is it possible to add the performance of the original Lin-UCB? - It would be good to have an ablation study w.r.t. the number of auxiliary functions. - Others are listed in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and suggestions. We have responded to each of the questions below. > ### *I am a little bit confused about the relationship of the variance reduction in terms of the number of auxiliary feedback being used. Under estimated $\beta$ and from Theorem 1, it seems the optimal number of auxiliary feedback being used is 1, which seems a little bit counter-intuitive, could the authors comment more on this aspect?* The variance reduction given in Theorem 1 depends on the auxiliary feedback via two terms: $\frac{t-2}{t-q-2}$ and $\rho^2$ (defined in Line 142). Setting $q=1$ will give the minimum value for $\frac{t-2}{t-q-2}$, but $\rho^2$ for $q=1$ will also be small as it only considers one auxiliary feedback, and hence maximum variance reduction will not be achieved. As we increase the number of auxiliary feedback, $\rho^2$ will increase, leading to more variance reduction. However, at the same time, the term $\frac{t-2}{t-q-2}$ will also increase, which can negate the variance reduction. Therefore, it is recommended to use a small number of auxiliary feedback (more in Remark 1). > ### *In Section 4, under estimated auxiliary functions, the sampling strategy for estimating the auxiliary function seems very computationally inefficient, and this leaves much smaller sample size for estimating the original f function compared with classical bandit algorithms, especially when the number of auxiliary functions is large. 
Remark 2 also does not make that much sense when $r_i$ goes to infinity, where we use most of the samples in the auxiliary function estimate.* In many real-world applications, a learner may have already collected auxiliary feedback (historical data), may collect auxiliary feedback independent of the reward sample (e.g., cheap low-fidelity simulations), or may even collect additional samples of auxiliary feedback with no reward sample; e.g., an online food platform will not get a user rating (reward) for each order but will have a food delivery time (auxiliary feedback) for each order. Therefore, our proposed method uses all observations with reward samples to estimate the reward function $f$. In practice, one can use an appropriate variant of our proposed method depending on the needs of their application (EH and BE variants shown in Fig. 2(a)-2(c)). However, it will be challenging to quantify the exact gain in variance reduction (as shown in Theorem 3 for a specific case). > ### *The IS and MF sampling strategy are only listed in the method discussions, and for the experiments when $q=2$, it is hard to compare the pros and cons of them as they are equivalently being the same. Could the authors add more results for larger $q$ to showcase the effectiveness of the two sampling methods. A larger $q$ might also be helpful in understanding the effectiveness of having more control variates.* In the control variate literature, it has been proven that the IS and MF sampling strategies are asymptotically optimal (Gorodetsky et al., 2020), i.e., the variance reduction achieved by both strategies is asymptotically the same as if the auxiliary feedback functions were known. Further, it is also shown that they both have similar empirical performance (Gorodetsky et al., 2020, Fig. 4), but these results are shown for a non-parametric offline setting. 
However, both strategies can be used for different problems: the IS sampling strategy suits problems in which different auxiliary feedback can be sampled independently, whereas MF sampling suits problems where auxiliary feedback cannot be sampled independently. We will add experiments on both sampling strategies with larger $q$ in a future version of the paper. When the auxiliary functions are known, we have already added an experiment result with larger $q \in \{1, 2, 3, 4, 5\}$. For more details, please check the attached pdf in the global rebuttal. > ### *I am not sure if there is any public dataset available to showcase the effectiveness of the method in more real-world scenarios. The current ablation experiments facilitate the understanding of the method, but adding real-world experiments would be much more convincing.* The main goal of this paper is to propose a method that exploits auxiliary feedback correlated with reward samples to improve the performance of parameterized bandit algorithms and to quantify the performance gain (i.e., reduction in regret). It is common in the bandit literature to measure the performance of bandit algorithms on synthetically generated data. We have also validated our method using several bandit instances. In the future, we will set up an elaborate experiment to demonstrate the effectiveness of our algorithms on real datasets. > ### *For Figure 2 (f), is it possible to add the performance for original Lin-UCB performance?* We have added Lin-UCB in Figure 2(f). For more details, please check Figure 1 in the attached pdf in the global rebuttal. > ### *It would be good to have an ablation study w.r.t the number of auxiliary functions.* We have added an experiment result with more auxiliary functions ($q \in \{1, 2, 3, 4, 5\}$). For more details, please check Figure 2 in the attached pdf in the global rebuttal. --- Rebuttal Comment 1.1: Title: Thank you for the response. 
Comment: Thanks for the authors' response, and I believe my initial score appropriately reflects the quality of this work. --- Reply to Comment 1.1.1: Title: Thank you for your review Comment: Dear Reviewer CBCn, Thank you for acknowledging our response and maintaining a positive opinion of our work. Your feedback is tremendously valuable to us, and we will include our responses in the revised version to further improve our paper. Regards,\ Authors
Summary: This paper focuses on the problem of parameterized bandits when extra auxiliary feedback is available, which can be utilized to construct an unbiased reward estimator that potentially has smaller variance; algorithms based on such an estimator can thus incur smaller regret. Experiments validate the estimator and the corresponding algorithm. Strengths: - The paper is clearly written: the problem statement, algorithms, theory, and experiments are easy to follow. - Control variate theory is applied in this paper, which is suitable for i.i.d. random processes in variance reduction, one of the central topics in bandits, and is of independent interest beyond the topic. - The proposed algorithm is evaluated both empirically and theoretically. Weaknesses: - The improvement is expected, as auxiliary feedback (AF) requires more information and computation. So this can be seen as a trade-off between extra information beyond rewards, extra computation, and a constant-order improvement in regret. - The hybrid reward only reduces variance when the AF is correlated with the reward with the same covariance $\sigma_{y,w}$. I'm worried this may not hold in most real-world applications. - Though extra AF is given, the experiments do not show how OFUL or LinUCB could utilize such AF. One can imagine another simple way to use this extra information: for example, learn a model $f:\mathbf{R}^d \rightarrow \mathbf{R}$ that maps the AF vector to the reward $y_t$, so that OFUL and LinUCB can gain extra feedback $f(AF)$ to better estimate the reward. If the baseline just discards the extra AF, it's not fair. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. In the following, we have responded to your questions. > ### *The improvement is expectable, as auxiliary feedback (AF) requires more information and computation. So this can be seen as a trade-off between extra information beyond rewards, extra computation and a constant order improvement in regret.* We agree with the reviewer's observation that there is a trade-off between the extra computation needed to incorporate auxiliary feedback in existing bandit algorithms and the improvement in regret. However, regret improvement is only possible when the auxiliary feedback correlates with the reward, and the regret improvement is upper bounded by the correlation between the reward and its auxiliary feedback. > ### *The hybrid reward only reduces variance when the AF is correlated with reward with same covariance $\sigma_{y,w}$. I'm worried this could not be true for most real-world applications.* Yes, we assumed that the correlation between the reward and its auxiliary feedback is the same across all actions, i.e., the same covariance $\sigma_{y,w}$ (Line 101). This assumption is reasonable as the source of randomness in reward and auxiliary feedback is zero-mean Gaussian noise, whose variance is independent of the action (a common assumption in many bandit algorithms, like OFUL, Lin-UCB, and UCB-GLM). However, this assumption can be violated in many real-world applications where the Gaussian noise varies across actions, making the covariance $\sigma_{y,w}$ vary across actions. The closest bandit setting to this problem is the bandit problem with heteroscedastic noise. Even though the definition of hybrid rewards used in the paper can still be used, it may not give the best possible variance reduction. Therefore, the problem of varying covariance $\sigma_{y,w}$ needs to be systematically studied and can be independent work, as we mentioned in Line 344. 
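The variance reduction that hybrid rewards provide under this shared-covariance assumption can be illustrated with a short Monte Carlo sketch; all the numbers below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_y, sigma_w, rho = 1.0, 0.5, 0.8      # reward/AF noise scales, correlation
cov = rho * sigma_y * sigma_w              # shared covariance sigma_{y,w}

noise = rng.multivariate_normal(
    [0.0, 0.0], [[sigma_y**2, cov], [cov, sigma_w**2]], size=n)
y = 2.0 + noise[:, 0]   # rewards around an unknown mean 2.0
w = 1.0 + noise[:, 1]   # auxiliary feedback around a known mean 1.0

beta = cov / sigma_w**2           # optimal control-variate coefficient
z = y - beta * (w - 1.0)          # hybrid reward: unbiased, lower variance

# var(z) should be close to var(y) * (1 - rho^2)
print(np.var(y), np.var(z))
```

With $\rho = 0.8$ the hybrid reward's variance drops by roughly a factor $1 - \rho^2 = 0.36$, which is the classical control-variate gain the rebuttal appeals to; if the covariance varied across actions, a single $\beta$ would no longer be optimal.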
> ### *Though extra AF is given, the experiments do not show how OFUL or LinUCB can utilize such AFs. One can imagine another simple way to use this extra information, for example, learn a model $f: R^d \rightarrow R$, which means to learn the function that maps the AF vector to the reward $y_t$, and thus OFUL and LinUCB can gain extra feedback $f(AF)$ to better estimate the reward. If the baseline just discards the extra AF, it's not fair.* We do not know of any parameterized bandit algorithm that can exploit the available auxiliary feedback. Therefore, our goal is to design a method that can exploit correlated auxiliary feedback to improve the performance (i.e., minimize the regret) of existing bandit algorithms. To achieve that, we use auxiliary feedback as control variates and extend the existing results from control variate theory to our setting. We use vanilla OFUL and Lin-UCB as baselines to demonstrate the performance gain achieved by our approach. Let $f: R^d \rightarrow R$ be the learned function that maps the auxiliary feedback vector to the reward $y_t$. Then, the extra feedback $f(AF)$ may not give a better reward estimate in OFUL and LinUCB. First, the auxiliary feedback vector may only partially correlate with the reward, e.g., the user rating of food also depends on food taste and quality (which cannot be observed by the platform) apart from the food delivery time. Second, getting an estimate of the reward from auxiliary feedback alone is impossible, as auxiliary feedback is only observed with the reward. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying; the authors clearly addressed my concerns, and I tend to keep my score. Good luck with the final decision. --- Reply to Comment 1.1.1: Title: Thank you for your review Comment: Dear Reviewer vUBt, Thank you for your positive feedback. We are glad that we were able to address all your concerns. We will include all our responses in the revised version of the paper. Regards,\ Authors
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort in evaluating our paper and for their detailed comments and suggestions. We hope our answers to your questions will alleviate your concerns and further improve your opinion of our work. If you have additional questions, we would be happy to address them. Here, we address two main concerns and respond to your questions in the individual rebuttals. ### **Main contributions:** The following are our main contributions: - **General setup:** Verma and Hanawal (2021) focus on a non-parameterized bandit setting, which assumes a finite number of actions and known auxiliary mean values. In contrast, we consider a more general bandit setting with a large (or even infinite) number of actions (i.e., parameterized bandits with contextual information). Further, we extend to a setting where unknown functions parameterize the different auxiliary feedback. - **Control variate theory with parameterized functions:** The control variate literature focuses on non-parameterized control variates (auxiliary feedback in our problem), i.e., control variates sampled from a fixed distribution. We first extend the existing control variate theory results to problems where known functions parameterize the control variates (Section 3) and then to problems where the functions parameterizing the control variates are unknown (Section 4). Our key contribution is designing an unbiased reward function estimator using hybrid rewards (a combination of the reward and its auxiliary feedback), which gives a maximum reduction in the estimator's variance. These contributions are themselves of independent interest in control variate theory. - **AFC bandit algorithm:** We introduce the notion of the Auxiliary Feedback Compatible (AFC) bandit algorithm. A bandit algorithm is an AFC bandit algorithm when certain conditions are satisfied (more details are in Definition 1). 
One can use hybrid rewards instead of only observed rewards in an AFC bandit algorithm, which leads to tighter confidence bounds and hence smaller regret. Our work has shown that the regret of AFC bandit algorithms can be improved by exploiting the auxiliary feedback. We hope the proposed method and techniques can be used for more challenging bandit problems with auxiliary feedback, e.g., bandit problems with heteroscedastic noise, non-Gaussian noise, adversarial contexts, and different environments. ### **Availability of Auxiliary feedback** Auxiliary feedback is easily available in many real-life applications. To illustrate that, we consider the following different scenarios: - **Reward sample with auxiliary feedback:** In many problems, the reward and its auxiliary feedback are observed jointly in each round. For example, consider a job scheduler (Verma and Hanawal, 2021) that aims to assign different jobs to available servers. The job's service time (reward) depends on its size (auxiliary feedback) and other factors (e.g., the load on the assigned server). In such settings, reward and auxiliary feedback are observed jointly. Therefore, the scheduler can either use available historical data to estimate the mean job size or use observed auxiliary feedback to estimate the auxiliary functions while ignoring the associated reward samples (which is inefficient). - **Auxiliary feedback with no reward sample:** Consider an online food delivery platform that keeps track of users' ratings (reward) to recommend the best-rated restaurant and can also observe the food delivery time (auxiliary feedback) for each order. The platform can observe the delivery time for every order but may only sometimes receive user ratings. In such settings, additional auxiliary feedback is available apart from historical data, which can be used for estimating the auxiliary feedback functions. Similar scenarios arise in online cab booking platforms and e-commerce platforms, as mentioned in the paper. 
- **Sampling auxiliary feedback without reward sample:** Consider a problem where getting samples from the high-fidelity simulation is very expensive but accurate. However, cheap low-fidelity simulations are available and correlated with expensive high-fidelity simulations. In such scenarios, one can independently collect sufficient samples to get a good estimate of low-fidelity simulations, and then the samples from cheap low-fidelity simulations can be used to minimize the variance of the estimator based on high-fidelity simulations. Pdf: /pdf/f3e9e1c2c1683ae0365321bb280d347fe022676f.pdf
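The variance-reduction mechanism behind the hybrid-reward idea can be illustrated with a minimal control-variate sketch. This is our own toy example, not the paper's estimator: the distributions, the coefficient formula, and the known auxiliary mean are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: reward y is correlated with auxiliary feedback w,
# and the mean of w is known (here, E[w] = 0).
n = 10_000
w = rng.normal(0.0, 1.0, n)                   # auxiliary feedback
y = 0.5 + 0.8 * w + rng.normal(0.0, 0.3, n)   # reward, true mean 0.5

# Plain sample-mean estimator of E[y].
plain = y.mean()

# Control-variate ("hybrid") estimator: subtract beta * (w - E[w]).
# The variance-minimizing coefficient is beta* = Cov(y, w) / Var(w).
beta = np.cov(y, w)[0, 1] / w.var()
hybrid_samples = y - beta * (w - 0.0)
hybrid = hybrid_samples.mean()

# Per-sample variance comparison: the hybrid estimator is much tighter,
# which is what yields the tighter confidence bounds mentioned above.
var_plain = y.var()
var_hybrid = hybrid_samples.var()
```

Both estimators are unbiased for E[y]; the hybrid one simply has smaller variance whenever y and w are correlated.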
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity
Accept (poster)
Summary: This paper uses a transformer on a neuroscience-relevant task of spatial navigation, and shows that when trained on both novel and familiar tasks simultaneously, place cell representations appear in different parts of the transformer depending on the current task – place cells in the feedforward net (post self-attention) for familiar tasks, and in the self-attention layer for both novel and familiar tasks. The ability of the transformer to perform on familiar tasks is related to the activation function in the feedforward net. Strengths: Interesting question of different representations for different task distributions. Gets at hippocampal consolidation, which is under-explored with models. Convincing simulation results. Weaknesses: 1) The NMDAR part of the paper is the least convincing, given that standard activation functions like ReLU seem to work just as well. I don’t think you can have sentences like ‘We find that NMDAR-like nonlinearity is essential for shifting short-term working memory to long-term reference memory in transformers’. 2) The NMDAR impairment is essentially just making the activation function more linear, which of course will mean that it can’t effectively store long-term memories. 3) To effectively make the NMDAR-receptor point as a different activation function class, you’d need to test it on more standard ML tasks, rather than bespoke neuroscience tasks. 4) You have shown a nice potential neuro-AI link via NMDAR, but it’s currently just a potential relationship. It would need a lot more testing to make it concrete, e.g. showing that alpha=10 (the best-performing model) is the regime that the brain operates in, etc. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Presumably episodic memory is a better name than working memory, given that you’re saying it’s hippocampal-related. See weaknesses for other questions. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As described in the weaknesses, the main limitation is about the interpretation of NMDAR and the relevance of the activation function for ML. Otherwise the paper is good. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your reviews and encouragement. The key __contribution__ of our work was to explore the resemblance between the NMDA receptor in the human hippocampus and activation functions in transformers. We are happy to see that the reviewer finds our results convincing. Here, we would like to address the reviewer’s __comments__ regarding 1) the strong statement on the NMDAR-like nonlinearity 2) the NMDAR impairment experiments 3) results on standard ML tasks 4) potential evidence of the relationship with the brain 5) terminology regarding working memory and episodic memory. These are important comments, and we believe that addressing them will improve the quality of our work. Please see our responses below: --- > __Response 1)__ The NMDAR part is the least convincing, being that standard activation functions work just as well. Don’t think you can have sentences like ‘We find that NMDAR-like nonlinearity is essential for shifting short-term working memory to long-term reference memory in transformers’. As the reviewer noted, the standard activation function ReLU works fine in the transformer model. Our intention was to mention that standard activation functions widely used in deep models such as ReLU, GELU, and Swish can be viewed as a subset of the NMDAR-like activation function (Table 1). We will consider ways to convey this message while toning down our expression, including the following in the abstract: (Original) We find that NMDAR-like nonlinearity is essential for shifting short-term working memory to long-term reference memory in transformers (Revised) We find that NMDAR-like nonlinearity has a beneficial role in shifting short-term working memory to long-term reference memory in transformers --- > __Response 2)__ The NMDAR impairment is essentially just making the activation function more linear, which of course will mean that it can’t effectively store long-term memories.
Your understanding is correct that making the activation function more linear in feed-forward layers affects the ability of transformers to efficiently store long-term memory. This finding is consistent with neuroscience, which shows that removing the nonlinearity caused by Mg2+ gating affects long-term memory formation [1]. While using linear activation functions may demonstrate limitations in machine learning, this experiment is linked to neuroscientific discoveries. It bridges the gap between understanding the transformer as a memory consolidation model and biological memory processes, extending beyond traditional machine learning paradigms. [1] Miyashita et al., (2012). Mg2+ block of Drosophila NMDA receptors is required for long-term memory formation and CREB-dependent gene expression. Neuron, 74(5), 887-898. --- > __Response 3)__ To effectively make the NMDAR-receptor point as a different activation function class, you’d need to test it on more standard ML tasks, rather than bespoke neuroscience tasks. Thank you for the suggestion. We agree that testing our NMDA function on standard ML tasks would be an effective way to establish the NMDAR-like nonlinearity as a new activation function class. We have conducted additional experiments on language modeling and image classification tasks. Please refer to our __Global Response 1)__. --- > __Response 4)__ You have shown a nice potential neuro-AI link via NMDAR, but it’s currently just a potential relationship. It would need a lot more testing to make it concrete, e.g. showing that α=10 (the best-performing model) is the regime that the brain operates in, etc. As suggested by the reviewer, we calculated the corresponding α value in physiological CA1 hippocampal neurons using real experimental values [1]. The calculated α ranged between 0.01 and 0.2.
|Term|Symbol|Typical Value|Units|
|-|-|-|-|
| Magnesium ion concentration | $[\text{Mg}^{2+}]$ | 1 | mM |
| Physiological temperature | $T$ | 37 (310) | °C (K) |
| Dissociation constant at $V=0$ mV | $K_{\text{Mg}^{2+}}$ | 1 – 20 | mM |
| Temperature constant | $\beta$ | 0.062 | mV$^{-1}$ |

These values are typical and can vary depending on the specific biological context, experimental conditions, and the model used. While previous research has shown that increasing the Mg2+ level in the brain with a specific compound such as MgT (magnesium-L-threonate) increases long-term memory formation [2], its effective concentration only increased about 15% from the baseline (an expected maximal α in the brain of ~0.23), possibly due to the physiological ion excretion process in humans. While the brain's effective α was lower than expected, we are nonetheless grateful to the reviewer for this suggestion. [1] Kirson et al., (1999). Early postnatal switch in magnesium sensitivity of NMDA receptors in rat CA1 pyramidal cells. The Journal of Physiology, 521(Pt 1), 99. [2] Slutsky et al., (2010). Enhancement of learning and memory by elevating brain magnesium. Neuron, 65(2), 165-177. --- > __Response 5)__ Presumably episodic memory is a better name than working memory, being that you’re saying it’s hippocampal related. - Terminology: Episodic memory is concerned with the ability to recall specific events and the contexts in which they occurred (What-Where-When), whereas working memory retains information over short durations to perform cognitive tasks. Therefore we used 'working memory' to emphasize short-term retention and processing in the hippocampus, akin to the limited context in transformers. - Why Not Episodic Memory?: We avoided this term as our study lacked the time element (When) essential for episodic memory, focusing on What-Where components instead. - Future Work: We plan to investigate What-Where-When aspects of episodic memory in future research on transformer models.
We appreciate the reviewer's feedback and hope this response clarifies our terminology choice. --- Rebuttal Comment 1.1: Title: Many thanks for your responses. Comment: Many thanks for your responses and I appreciate your calculation of alpha in real neurons. I still think this is a good paper, and I keep my original score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to read our responses and providing insightful feedback. The rebuttal process has helped us improve our paper, and we are extremely grateful to the reviewer for continued encouragement and support. We will do our best to incorporate the changes discussed in the revised version. Once again, thank you for the thoughtful feedback!
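As a rough sketch of the activation family discussed in this thread, an Mg2+-gating term suggests an NMDAR-like nonlinearity of the shape x / (1 + α·e^{−βx}). This exact parameterization is our assumption, inferred from the rebuttal's description (Table 1 reportedly recovers ReLU, GELU, and Swish as special cases), and is not quoted from the paper:

```python
import numpy as np

def nmda(x, alpha=1.0, beta=1.0):
    # Assumed NMDAR-like form: x / (1 + alpha * exp(-beta * x)).
    # alpha plays the role of the Mg2+-gating strength discussed above.
    return x / (1.0 + alpha * np.exp(-beta * x))

x = np.linspace(-5.0, 5.0, 201)

# With alpha = 1, beta = 1 this reduces to Swish/SiLU: x * sigmoid(x).
silu = x * (1.0 / (1.0 + np.exp(-x)))

# alpha -> 0 makes the function (near-)linear, mimicking the
# "NMDAR impairment" regime discussed in Response 2.
near_linear = nmda(x, alpha=1e-6)
```

Under this form, pushing β large approaches ReLU for the same reason, consistent with the claim that standard activations are limiting cases of the family.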
Summary: This paper builds on recent work connecting the hippocampus to the transformer architecture by introducing a new hippocampus-inspired activation function for the transformer's feedforward modules. In a toy navigation task, several empirical investigations show that the choice of this activation function has a large effect on the model's ability to store and recall information from longer-term memory, and that models using brain-like activation functions contain neurons with place cell-like firing patterns. Strengths: * This paper continues an interesting research direction -- exploring the connections between the hippocampus and the transformer architecture. This is exciting work that is relevant to both machine learning and neuroscience. * The empirical analysis is presented well with many attractive and easy-to-interpret figures, seems quite thorough, and largely supports the paper's conclusions. Weaknesses: * The paper presents a convincing argument that the nonlinearity in the feedforward module of a transformer is important for forming and recalling certain types of memories, and that the brain-inspired nonlinearity is among the best. However, it provides essentially no understanding (even intuitively) of why this should be the case. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Regarding the above weakness: How might the choice of nonlinearity give rise to the effects (reference memory error rate, place cell score, etc) documented here? * Why is a recurrent embedding used for actions? What happens if a different embedding is used? * The definition of place cell score is crucial to the paper and should be in the main body, I think. On a related note, I find the definition given in the appendix difficult to understand. In particular, given how the auxiliary graph $\mathcal G$ is constructed, it seems to me that every vertex will be a direct descendant of $node_k$ since $node_k$ has the highest firing rate. 
How this (a) measures sensitivity of a neuron to a specific location or (b) resembles the peak method used to identify place cells in the neuroscience literature should be explained more carefully. A figure might also be helpful here. * The discussion on lines 318-324 notes that the results here agree qualitatively with experimental results from neuroscience. It would be nice to have a figure summarizing these findings, and perhaps comparing against this paper's results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that the reviewer recognizes our __contributions__ in 1) connecting the hippocampus to the transformer architecture, 2) the convincing empirical analysis that supports the transformer’s longer-term memory formation, 3) unveiling place cell-like firing patterns in a feed-forward layer, and 4) presenting many attractive and easy-to-interpret figures. Here, we would like to address the reviewer’s __comments__ regarding: 1) Nonlinearity's importance in memory formation. 2) Choice of nonlinearity and its effects. 3) Recurrent embedding for actions and alternatives. 4) Place cell score definition and explanation. 5) Summary figure linking findings to neuroscience. We believe addressing these comments will enhance our work. Please see our responses below: --- > __Response 1)__ It provides essentially no understanding (even intuitively) of why this should be the case. Based on the reviewer’s comment, we find it important to give more intuitive evidence of why the proposed nonlinearity is effective. To provide a better understanding of our work, we present both analytical and empirical results indicating that increasing alpha corresponds to increasing sparsity. Please refer to __Global Response 2)__. --- > __Response 2)__ How might the choice of nonlinearity give rise to the effects (reference memory error rate, place cell score, etc) documented here? The high-sparsity property of the NMDA function may contribute to the increased place cell scores (__Figure S13 in attached pdf for rebuttal__). In the case of Transformers, enforcing neuron activation sparsity in MLPs has been found to improve the interpretability or selectivity of a higher percentage of neurons [1]. This evidence could explain why the place cell score increases when 𝛼 is increased. Among the various activation functions, NMDA with 𝛼=10 has the highest Gini index. A comparison of the Gini index among activation functions is shown in the figure.
We will include this result in the Appendix. [1] Elhage et al., (2022). Softmax linear units. Transformer Circuits Thread. --- > __Response 3)__ Why is a recurrent embedding used for actions? What happens if a different embedding is used? A recurrent positional embedding for actions was used to capture the temporal dependencies in the agent's actions. The model can encode information about the agent's previous actions and incorporate it into the current position prediction by using recurrent positional embeddings. This can be particularly useful in tasks where the sequence of actions is important, such as navigation. If a different embedding method is used, such as a non-recurrent learnable positional encoding, the model may still be able to learn the task. In Appendix A.5, we conducted an experiment that may be relevant to the reviewer's question. In this experiment, we disrupted the embedding layers to make them non-recurrent, effectively preventing them from retaining previous action information in embeddings. The results indicate that working memory error and reference memory error increased significantly (see Fig. 3a and Fig. S3a). Notably, the behavior observed in this experiment is similar to the trend seen when increasing 𝛼 of NMDA$_{\alpha}$ (see Fig. S3b). These findings imply that, while path-integrated information from the recurrent positional embedding is useful for learning the spatial structure of the map, it is neither required nor essential for predicting the unvisited node. This finding supports the idea that working memory is crucial for memory consolidation and that disrupting it can cause impairment in reference memory. --- > __Response 4)__ The definition of place cell score is crucial to the paper and should be in the main body. … A figure might also be helpful. Thank you for the in-depth examination of our place cell section. We agree that giving the definition of the place cell score is important.
We had tried various writing styles, including having the definition in the main body. In the current version, we tried to go directly to the main results. We would be happy to incorporate this feedback and rearrange our content. As the reviewer suggested, we believe including a schematic figure to explain how an auxiliary graph is constructed would help readability. As shown in __Figure S1 in attached pdf for rebuttal__, not all nodes are direct descendants, since the graph is constructed from the 2D grid map. We are grateful for the reviewer's suggestions to improve our work. --- > __Response 5)__ It would be nice to have a figure summarizing that the results here agree qualitatively with experimental results from neuroscience. We will include a summary figure that connects the transformer (this work) and the brain (experimental findings). Please see our __Figure S11 in attached pdf for rebuttal__. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. The new figures are very helpful and I hope they will be included in future versions of this paper. In light of this, I'm increasing my score (5 to 6). --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to read our responses and providing insightful feedback. The rebuttal process has helped us improve our paper, and we are happy to hear that the newly added information is helpful. We are incredibly grateful to the reviewer for raising the evaluation score, and we will do our best to reflect the corresponding changes in the revised version. Thank you so much!
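The sparsity argument in Response 2 above (higher α giving sparser FFN activations, as measured by the Gini index) can be sanity-checked with a small sketch. The activation form, the use of absolute activation values, and the Gaussian pre-activations are our illustrative assumptions, not the paper's analysis:

```python
import numpy as np

def gini(a):
    """Gini index of |activations|: 0 = perfectly uniform, near 1 = maximally sparse."""
    a = np.sort(np.abs(a))
    n = a.size
    i = np.arange(1, n + 1)
    # Standard closed form for sorted nonnegative values.
    return (2.0 * np.sum(i * a)) / (n * a.sum()) - (n + 1.0) / n

def nmda(x, alpha, beta=1.0):
    # Assumed NMDAR-like form discussed in the thread (illustrative only).
    return x / (1.0 + alpha * np.exp(-beta * x))

rng = np.random.default_rng(0)
pre = rng.standard_normal(10_000)  # toy pre-activation FFN inputs

g_low = gini(nmda(pre, alpha=0.1))    # weak gating: denser activations
g_high = gini(nmda(pre, alpha=10.0))  # strong gating: sparser activations
```

In this toy setting, the strong-gating population concentrates most activations near zero with a heavier positive tail, so its Gini index comes out higher, matching the qualitative claim about α=10.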
Summary: Many recent papers have shown a relationship between Transformers and biological structures, particularly the hippocampal formation. This work demonstrates that the types of non-linearities provided by NMDAR dynamics (which have known biological importance) can be beneficial in a Transformer architecture for a toy what-where memory/navigation task. Strengths: This paper deepens the connection between Transformers and biology, which helps both the neuroscience and machine learning communities in understanding these systems. It may also help improve Transformer models, which are already very popular and performant but may be further improved by inspiration from neuroscience. Weaknesses: The values of $N$ used in Figure 3 suggest that in this case there is a ceiling effect, i.e., once N is sufficiently large, the benefit of the NMDA-like non-linearity becomes negligible. Further, while error bars are included, I could not find the statistical testing to demonstrate differences between the groups. I think it is important to do such tests to measure the size and significance of any effects. Although the what-where task is clearly neuroscientifically-inspired and has been used in past similar studies, it is not clear whether anything contained here will generalise to other, more ecologically-valid tasks from neuroscience or more naturalistic real-world data from machine learning. The authors should test on more complex tasks/data. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 1. Different activation functions have different computational footprints. How does the computational footprint of the proposed activation function compare to others and is the trade-off worth the performance increase? 2. Line 318. Why is this surprising? 3. While TEM has been popularised and cited widely, and is discussed here, how is it considered 'state-of-the-art' when it is essentially a very fancy Markov chain model? 
Or do the authors argue that that's what hippocampus is/does? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: Limitations need more discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that the reviewer recognizes our __contributions__ regarding 1) the connection between transformers and hippocampal formation, including the incorporation of NMDAR nonlinear dynamics, and 2) its potential benefits for both the neuroscience and machine learning communities. Here, we would like to address the reviewer’s __comments__ regarding 1) the ceiling effect via increasing N and the lack of statistical testing in Figure 3 2) generalization to more complex or real-world tasks 3) the computational footprint of the proposed activation function 4) clarification of why line 318 is surprising 5) TEM's characterization and relevance to the hippocampus. These are great comments, and we believe that addressing them will improve the quality of our work. Please see our responses below: --- > __Response 1)__ Values of N used in Fig 3 suggest there is a ceiling effect. If N is sufficiently large, does the benefit of the NMDA-like non-linearity become negligible? It is important to do statistical testing to demonstrate differences between the groups to measure significance. Thank you for the feedback. We agree that conducting statistical tests of significance will provide more convincing evidence. Regarding this comment, we ran additional experiments to the level where statistical analysis was possible. We used a nonparametric statistical test, the Mann-Whitney U test, across all groups in Figure 3. This result is included in the provided __Figure S7 in attached pdf for rebuttal__. We will include this result in our Appendix. As shown in Figure S7d-f in the attached pdf for rebuttal, the overall significance level of NMDA vs. others increases (i.e., yellow color means lower $p$-values). We acknowledge we are unable to fully address the reviewer’s concern about the ceiling effect by increasing N.
However, based on our current statistical test results, as suggested by the reviewer, NMDA-like non-linearity appears to be effective even in larger N (up to 64) conditions. --- > __Response 2)__ It is not clear whether anything contained here will generalize to other tasks. The authors should test on more complex tasks/data. Thank you for the suggestion. Testing the proposed activation function on more complex tasks and datasets will strengthen our findings. We have conducted experiments on language modeling and image classification tasks. Please refer to __Global Response 1)__. --- > __Response 3)__ How does the computational footprint of the proposed activation function compare to others and is the trade-off worth the performance increase? The computational footprint of the NMDA function is comparable to that of Swish and requires less computation than GELU. Under the JIT compile feature in PyTorch, the computation speeds of Swish, NMDA, and GELU are essentially the same on an actual GPU. ReLU offers low computational cost and high memory efficiency in large-language-model training, but it is not widely used in language models due to its inefficiency in training stability and learning speed. GELU is widely used, despite its higher computational demands, because of its stable training process and rapid reduction in training loss. While we did not observe the stability issues associated with ReLU in our work, taking these trade-offs into account when evaluating the proposed activation function will be important. We appreciate the reviewer's insight and will consider these factors in future studies. --- > __Response 4)__ Line 318. Why is this surprising? The role of NMDAR in the CA1 region of the hippocampus is known to be vital for long-term memory formation in neuroscience, leading to discoveries such as place cells (i.e., a finding that won a Nobel Prize).
We were surprised to discover parallels between this process and transformers, a leading architecture in deep learning. We observed similarities such as the use of NMDAR-like nonlinearity, the emergence of place cells in feed-forward networks (FFNs), and the effect of linear activation functions on long-term memory. To clearly convey these points, we have newly added a figure to illustrate these connections in __Figure S11 in attached pdf for rebuttal__; we will include this figure in Appendix. --- > __Response 5)__ While TEM has been popularised and cited widely, and is discussed here, how is it considered 'state-of-the-art' when it is essentially a very fancy Markov chain model? Or do the authors argue that that's what hippocampus is/does? Thank you for your insightful comment regarding the classification of the Tolman-Eichenbaum Machine (TEM) as a 'state-of-the-art' model. The navigation problem we addressed is inherently non-Markovian. However, the TEM allows us to render the problem Markovian by including a memory M, an integral part of the model that encompasses location-sensory conjunctions. The TEM is based on the hypothesis that hippocampal cells encode these conjunctions (p = flatten($x^{T} * g$)) and that memories are rapidly stored in weights M through Hebbian learning ($M = M + p^T * p$). This approach not only explains various neural representations in spatial tasks but also extends to non-spatial tasks. The ability of TEM to represent abstract spatial relationships via sensory input (lateral entorhinal cells) and abstract locations (medial entorhinal cells) uniquely positions it as a model capable of explaining complex neural phenomena. Its ability to store memories via simple Hebbian learning also underscores its novelty. 
We referred to TEM as the 'state-of-the-art' model, not only because of its popularity but also because of its comprehensive ability to mimic hippocampal synaptic potentiation, aligning closely with observed neural representations in both spatial and non-spatial tasks. We hope this clarifies our rationale behind classifying the TEM as 'state-of-the-art' and we welcome any additional comments. (We will also consider giving it a more neutral name.) --- Rebuttal Comment 1.1: Title: Response Comment: Responses 4 and 5 are the least convincing. The authors definitely should not refer to TEM as 'state-of-the-art'. I am also unconvinced about the practicality of this contribution to practical problems, i.e. the performance-cost tradeoff. Therefore, claims of this nature should also be revised. I am therefore downgrading my score slightly based on the authors' responses. --- Reply to Comment 1.1.1: Comment: We regret hearing that our responses were unsatisfying. We would like to ask whether the reviewer meant to change the score from "borderline accept (5)" to "weak reject (4)" given the comment "I am therefore downgrading my score slightly". Instead, the reviewer has reduced the score to "reject (3)," and we sincerely request the reviewer to reconsider our work. Given that the response period is still open, we would like to have the opportunity to address the reviewer's feedback better. As the reviewer initially mentioned, our work contributes to connecting Transformers and biology, which helps the neuroscience and machine learning communities understand these systems. We consider this to be an important effort. --- **Regarding TEM (feedback 5)**, we would like to clarify that we do not consider this model to fully represent the hippocampus. TEM is just one of the models that explain the generalizability of the hippocampus and capture the relational properties of the states, a class of model related to the successor representation (SR) [1] (similar to the SMP model [2]).
To the best of our knowledge, the recent work, TEM-t, is probably the only work that bridges the powerful transformer model with the hippocampus, and our original intention was to highlight this. Thus, we agree that our sentence may have misled the reviewer. As stated in our previous response, we will change the description of TEM to "a recent model that bridges with transformer". As an interdisciplinary team of computer scientists and neuroscientists, we recognize that our description of related work may not have been complete. We'd be happy to accommodate any further suggestions on the literature. [1] Dayan, P. Improving generalization for temporal difference learning: the successor representation. Neural Comput. 5, 613–624 (1993). [2] Uria, B. et al., The spatial memory pipeline: a model of egocentric to allocentric understanding in mammalian brains. Preprint at bioRxiv (2020). --- **Regarding feedback 4**, "Line 318. Why is this surprising?" To our knowledge, no prior studies have linked the transformer's capability for long-term memory in its feed-forward layer to established neuroscience observations. Our study shows that the transformer model aligns with known experimental findings about the role of NMDAR in the hippocampal CA1 memory formation process. To convey this point more effectively, we tried to explain this exciting viewpoint with a schematic figure in the rebuttal process. We kindly ask the reviewer to offer more context for the question. 1. **Conceptual Bridge between Neuroscience and Machine Learning**: Transformers are a deep learning architecture, originally designed to process sequential data in tasks like natural language processing. The connection between these computational models and biological processes in the brain is not inherently obvious. Thus, finding evidence that Transformers can be utilized to model memory consolidation and the dynamics of NMDAR in the hippocampus represents an unexpected bridge between two distinct domains. 2. 
**Correspondence with Specific Biological Mechanisms**: The fact that the non-linear dynamics of NMDAR and associated parameters (such as Mg$^{2+}$ gating) have specific correspondences in the transformer model (through the activation functions and the modulation of $\alpha$) is a new finding. This correspondence is not merely a superficial similarity but appears to have functional implications for modeling memory formation and place cell representation, which are critical processes in biological neural systems. 3. **Consistency with Prior Experiments**: Our findings align qualitatively with previous NMDAR impairment experiments in neuroscience, which surprised us. This strengthens the connection between the computational model and biological reality and suggests potential insights for both fields. In the context of the existing literature, the ability of a computational model to replicate these known biological effects is unforeseen. 4. **Potential for Practical Applications**: By deepening the understanding of both Transformers and the underlying biological processes, our findings can further lead to improvements in machine learning algorithms inspired by neuroscience and potentially even insights into biological processes informed by computational models.
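The Hebbian memory mechanism attributed to TEM earlier in this thread (p = flatten($x^T * g$), $M \leftarrow M + p^T p$) can be sketched in a few lines. This is our own toy rendering of the quoted update rule, not TEM's actual implementation; the dimensions and the retrieval-by-matrix-product step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

d_x, d_g = 4, 6
x = rng.standard_normal(d_x)  # sensory code (lateral entorhinal, per the TEM description)
g = rng.standard_normal(d_g)  # abstract location code (medial entorhinal)

# Conjunctive hippocampal code: p = flatten(x^T * g).
p = np.outer(x, g).flatten()

# One-shot Hebbian storage: M <- M + p^T * p.
M = np.zeros((p.size, p.size))
M += np.outer(p, p)

# Retrieval: cueing the memory with p returns a copy of p scaled by ||p||^2,
# i.e. the stored location-sensory conjunction is recovered up to scale.
recall = M @ p
```

Because M is the outer product of the stored pattern with itself, M @ p equals (p · p) p, which is what makes one-shot Hebbian storage recoverable by a simple matrix-vector product.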
Summary: This paper investigates the resemblance between the NMDA receptor (NMDAR) in the hippocampus of the human brain and the activation functions used in the transformer architecture (e.g., ReLU, GELU). Then, this paper presents a new activation function that exhibits similarities to NMDAR. It demonstrates that by adjusting the hyperparameter associated with this activation function, the memory capabilities of transformers can be fine-tuned. A 2D grid navigation experiment with transformers is investigated to examine the working memory and the reference memory. Strengths: 1. This paper explored the famous transformer model from a neuroscience perspective, by drawing connections with the hippocampus in the human brain. 2. This paper first investigates the similarity between the NMDAR in the hippocampus and the activation function such as ReLU and GELU in transformer models. 3. The paper in general is well-written and easy to follow. Weaknesses: 1. Although the proposed idea is interesting and neuro-inspired, the technical contribution seems limited (for ML venues). Based on my understanding, Sec 2.2 is a theoretical review of existing work, and the derivation of the NMDAR activation function in Sec 2.3 (w/. A.3 in appendix) is in general straightforward given previous work. 2. The empirical results on the 2D navigation task seem promising, but it may be worthwhile to explore more general tasks that transformer models are typically applied to, e.g., language tasks, to better validate the efficacy of the proposed activation function. 3. In Figure 3 (a), the result shown in the upper right subfigure (test) demonstrates that the proposed method could not predict the unvisited nodes in the novel map. Are there more detailed explanations for the phenomenon? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The technical contributions of this work could be better elucidated. 2. 
More practical empirical studies e.g., on real-world large-scale datasets or more general and complicated tasks will be more interesting and helpful. My final score will largely depend on the rebuttal and the discussion with other reviewers. I am willing to increase my score if the concerns are adequately addressed. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I did not find potential negative societal impacts in this work. See the “Weaknesses” section for my concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your reviews. The key __contribution__ of our work was to explore the resemblance between the NMDA receptor in the human hippocampus and activation functions in transformers. We appreciate that the reviewer finds our work to be a novel approach introducing a new activation function that reflects the properties of NMDAR and highlights its potential to fine-tune memory capabilities in transformer models. Here, we would like to address the reviewer’s __comments__ regarding 1) the technical contribution for ML venues, 2) tests on more general and practical tasks, and 3) more detailed explanations for specific results. These are important comments, and we believe that addressing them will improve the quality of our work. Please see our responses below: --- > __Response 1)__ The technical contribution seems limited for ML venues. The technical contributions of this work could be better elucidated. The following items demonstrate our technical contributions in terms of improvements over previous work (items 1 & 2) and new experiments (items 3 & 4). 1. We designed a spatial navigation task that allows the intuitive separation of short-term memory and long-term memory performance. 2. We integrated standard activation functions (ReLU, GELU, Swish) into NMDA$_{\alpha, \beta}$ (Table 1) and proposed an additional hyperparameter, $\alpha$, which can potentially be beneficial for ML tasks. 3. (Newly added) We conducted additional experimental tasks for standard vision and language models (ViT and GPT2). See details in __Global Response 1)__ and __Figure S12 in the attached rebuttal PDF__. 4. (Newly added) We provide an additional sparsity analysis for the NMDA$_\alpha$ activation function. This includes the mathematical intuition and empirical results on how increasing $\alpha$ corresponds to an increase in sparse activation in the FFN population. 
We believe that this analysis will provide a better understanding of our NMDA nonlinearity to ML venues. See details in __Global Response 2)__ and __Figure S13 in the attached rebuttal PDF__. --- > __Response 2)__ It may be worthwhile to explore more general tasks that transformer models are typically applied to, e.g., language tasks, to better validate the efficacy of the proposed activation function. More practical empirical studies, e.g., on real-world large-scale datasets or more general and complicated tasks, will be more interesting and helpful. We agree that exploring more general tasks with our proposed activation function would be more interesting and helpful; testing it over more complex tasks and datasets will strengthen our findings. Thank you for the suggestion. We have therefore conducted additional experiments on language and image classification tasks. For the results, please refer to __Global Response 1)__ and __Figure S12 in the attached rebuttal PDF__. --- > __Response 3)__ In Figure 3 (a), the result shown in the upper right subfigure (test) demonstrates that the proposed method could not predict the unvisited nodes in the novel map. Are there more detailed explanations for the phenomenon? This is a good catch. It is due to a design constraint of the transformer model. The problem in our proposed model is caused by the fixed length of the context window (c=64), as depicted in Figure 2b. Any node that has not been visited by the agent in the previous 64 steps is classified as unvisited, which means it is outside the current context window. The key difference between our model and the standard transformer model is the recurrent positional embedding, which encodes the previous action sequence information rather than the preceding sequence of observations. 
Because of the context window size, our model is unable to access the sensory observations of unvisited nodes via the self-attention mechanism. We will try to explain this limitation better. --- Rebuttal 2: Title: Responses to Authors Comment: I appreciate the responses from the authors and found the newly added figures from the global response helpful. My second question (More practical empirical studies) is addressed and I am willing to increase my score accordingly. --- Rebuttal Comment 2.1: Comment: We sincerely thank the reviewer for taking the time to read our responses and providing insightful feedback. The rebuttal process has helped us improve our paper, and we are happy to hear that the reviewer finds the newly added information helpful. We are extremely grateful to the reviewer for raising the evaluation score, and we will do our best to reflect the corresponding changes in the revised version.
Rebuttal 1: Rebuttal: We are grateful to the reviewers for their insightful comments on our study. All reviewers recognized our contribution of investigating NMDAR-like nonlinearity in the transformer's feed-forward network, which will be beneficial for both the ML and neuroscience communities. Reviewers find our work well-written and easy to follow (__Reviewer TzRp__), with many attractive and easy-to-interpret figures (__Reviewer xzxf__). Reviewers also think our empirical results support our conclusions well (__Reviewer FYwU & xzxf__). During the rebuttal period, we made additional figures and conducted additional experiments to address the reviewers’ questions and concerns. Highlights include 1) experiments on standard machine learning tasks with ViT and GPT2 (Figure S12), 2) a sparsity analysis of activation functions (Figure S13), 3) improved illustrations of place cells (Figure S1), 4) statistical significance tests (Figure S7), and 5) clearer comparisons between the hippocampus and the transformer model (Figure S11). These efforts demonstrate our commitment to addressing the reviewers' questions and enhancing the connection between neuroscience and machine learning. ___Please find our attached PDF regarding our response.___ Thank you. --- > __Global Response 1)__ standard machine learning tasks on ViT and GPT2 Testing our NMDA activation function over more complex tasks and datasets will strengthen our findings. We have conducted the following set of additional experiments on language and image classification tasks. __1. Language modeling with GPT2__ We have tested the NMDA$_\alpha$ on the GPT2 model with the OpenWebText dataset. Our analysis indicates a slight performance improvement. Please find the resulting loss curves in __Figure S12 in the attached rebuttal PDF__. __2. 
Image classification tasks with ViT__ We also tested the NMDA$_\alpha$ on the ViT model (3 trials for each condition) and found an increasing tendency of performance (although statistically non-significant). The table below reports top-1 test accuracies for the CIFAR-100 and the TinyImageNet datasets.

| **Dataset** | GELU | NMDA$_{\alpha=10}$ | NMDA$_{\alpha=0}$ |
|--------------------|----------|-------------------------------|-----------------------------|
| **CIFAR-100** | $69.92\pm0.34$ | $70.27\pm0.47$ | $49.91\pm0.58$ |
| **TinyImageNet** | $54.90\pm0.45$ | $55.72\pm0.03$ | $40.36\pm0.52$ |

We would like to mention that although these results are promising, we are not conclusive in stating that the NMDA function outperforms GELU on all real-world tasks. Future work will include testing our NMDA function on different model sizes of GPT and other language models, as well as testing the pre-trained models on downstream NLP tasks such as reading comprehension, question answering, common sense, MMLU, and BIG-bench. --- > __Global Response 2)__ sparsity analysis on activation functions To provide a better understanding of our work, we present both analytical and empirical results indicating that increasing $\alpha$ corresponds to increasing the sparsity of the feed-forward layer neuronal population, which supports the last paragraph of results section 3.2 (lines 246-248) and the third paragraph of the discussion section (lines 309-311). __Mathematical intuition__: Given $ \text{NMDA}_{\alpha}(x) = x / (1+\alpha e^{-x}) $, we can rewrite this function as follows: $ \text{NMDA}_\alpha(x) = x \cdot \frac{1}{1+e^{-(x-c)}}, \quad \text{where } \alpha = e^{c}. $ From the above expression, increasing $\alpha$ corresponds to shifting the sigmoid function in the positive direction by increasing $c$, which can be interpreted as raising the threshold of the information gating mediated by the sigmoid function. 
As a result, an increase in $\alpha$ may increase the sparsity of the downstream population activities. __Empirical result__: To confirm the above mathematical intuition, we measured the Gini index [1] to determine the sparsity of the feed-forward layer activities. For each input sequence, we calculate the Gini index $ G = \frac{\sum_{i=1}^{K} \sum_{j=1}^{K} |x_i - x_j|}{2K^2 \bar{x}} $ where $x_i$ is the $i$-th neuron’s activation value and $ \bar{x} = \sum_{i=1}^{K} x_i / K$ is the mean of the activation values in the feed-forward layer ($ K=2048 $ is the total number of neurons in a feed-forward layer). The Gini index ranges from 0 to 1. When only a few neurons have high activation values and the others have low values, the Gini index is close to 1; when most neurons have homogeneous activation values, the Gini index is close to 0 (1 = absolute sparsity, 0 = all activations equal). As shown in __Figure S13 left in the attached rebuttal PDF__, increasing $ \alpha $ in NMDA causes the Gini index to rise. This increase suggests that population activities become more heterogeneous, resulting in a heavy-tailed distribution. Furthermore, the Gini index of NMDA with $\alpha = 10$ is greater than that of the GELU activation function (represented by the dashed line). These results imply that $\text{NMDA}_{\alpha}$ may improve long-term memory formation by increasing the sparsity of activations in the feed-forward layer. Prior studies [2] investigated generalization performance and sparsity in overparameterized models. Although the mechanism underlying the emergence of sparse representations in large models is not fully understood, it is worth noting that overparameterized models with sparse representations are simpler than those with dense representations. The discussion above will be included in the Appendix. [1] Miller et al., (ICLR 2021). Divisive Feature Normalization Improves Image Recognition Performance in AlexNet. 
[2] Li et al., (ICLR 2022). The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers. Pdf: /pdf/dc4ecdd99e246fcaef19b8f147e313fa71de2492.pdf
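As an aside for readers, the two computations in Global Response 2 (the sigmoid-shift identity $\text{NMDA}_\alpha(x) = x\,\sigma(x-c)$ with $\alpha = e^c$, and the Gini index of a population of activations) are easy to check numerically. The sketch below is illustrative only, not the authors' code; array sizes and values are arbitrary.

```python
import numpy as np

def nmda(x, alpha):
    """NMDA_alpha(x) = x / (1 + alpha * exp(-x))."""
    return x / (1.0 + alpha * np.exp(-x))

def nmda_shifted_sigmoid(x, alpha):
    """Equivalent form x * sigmoid(x - c) with c = log(alpha)."""
    c = np.log(alpha)
    return x / (1.0 + np.exp(-(x - c)))

def gini(x):
    """Gini index of a non-negative activation vector: ~1 sparse, 0 uniform."""
    x = np.asarray(x, dtype=float)
    K = x.size
    pairwise = np.abs(x[:, None] - x[None, :]).sum()
    return pairwise / (2.0 * K**2 * x.mean())

x = np.linspace(-5.0, 5.0, 101)
# The two forms of NMDA_alpha agree pointwise.
assert np.allclose(nmda(x, 10.0), nmda_shifted_sigmoid(x, 10.0))

uniform = np.ones(2048)
one_hot = np.zeros(2048)
one_hot[0] = 3.7
print(gini(uniform))  # -> 0.0 (all activations equal)
print(gini(one_hot))  # close to (K-1)/K, i.e. ~0.9995 (maximally sparse)
```

A one-hot activation vector attains the maximal value $(K-1)/K$ of this normalized Gini index, which matches the "1 = absolute sparsity, 0 = all activations equal" reading above.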
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization
Accept (spotlight)
Summary: This paper proposes a first-order optimization method for convex optimization that requires gradient computations and matrix-vector products. Based on recent advances in quasi-Newton methods, this method converges at a rate that matches the optimal rate ($1/k^2$) when $k = \Omega(d)$ and improves upon the optimal rate when $k \gg d$. In order to achieve this rate, the authors consider a quasi-Newton method with a backtracking line search and a projection-free online algorithm to approximate the Hessian of the objective. Strengths: This paper has several strengths: - The proposed rate improves upon the best-known rates. In particular, it is faster than the convergence rate of NAG when the number of iterations is larger than the dimension. - The authors really try to give the main steps of the results in the main paper. - While the paper is quite technical, I found that the contributions are well-presented. Weaknesses: I found the paper could be slightly improved: - The experiments could support the theory more: - The logistic loss is strictly convex. Thus it seems that the rate in the experiments should be super-linear. It is not clear in the experiments if we are observing these super-linear rates or if we see sublinear convergence. I would instead try the log-sum-exp loss (logistic regression for multi-class), which is not strictly convex. - I would be curious to see if, actually, one can find a practical loss for which one can observe the sublinear rate $1/k^{2.5}$. - A plot with time as the x-axis could illustrate the fact that in many situations, matrix-vector product computations are not the bottleneck. - Some details on the online method are missing in the main paper: - The subroutine could be described (at least intuitively). - The main result about the regret bound could be stated (basically logarithmic regret). - In Theorem 1 the probability $p$ does not appear in the bound. 
I understand it only appears in the matrix-vector product complexity, but it makes the statement quite odd. I would suggest that the authors at least mention it in a remark. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can you explain why you cannot use standard projection-free no-regret methods, such as those in "New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees" by Dan Garber and Ben Kretzu? I assume it would not be straightforward since your notion of regret is different (dynamic regret), but I would be curious to know how your method compares with respect to this related work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors mostly address the limitations of their work. They mention that their algorithm requires many matrix-vector products and, thus, their rate is faster than NAG if the gradient computation is considered the bottleneck. They mention that their algorithm is slower than BFGS in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. We address your concerns below. --- **Q1 It is not clear in the experiments if we are observing super-linear rates or if we see sublinear convergence. Try the log-sum-exp loss, which is not strictly convex.** **A1** This is a very good observation. We conducted new experiments on the log-sum-exp function $f(x) = \log( \sum_{j=1}^n e^{\langle a_j, x\rangle - b_j}) $, where the dimension is $d=150$, the number of samples is $n=150$, and we follow the procedure in [1] to generate $\\{a_j\\}$ and $\\{b_j\\}$. As shown in Fig. 3 in the attached pdf, we observe that both NAG and A-QNPE converge at a sublinear rate. --- **Q2 I would be curious to see if one can find a practical loss for which one can observe the sublinear rate $O(1/k^{2.5})$.** **A2** Thanks for raising this point. So far we have not observed the sublinear rate $O(1/k^{2.5})$ in any of our experiments. Indeed, our theoretical result is pessimistic by nature, and the empirical performance of A-QNPE can be better than what the theory predicts. For instance, in our new experiment, Fig. 3(c) indicates that on this specific problem NAG converges at the rate of $O(1/k^{3})$, while A-QNPE converges at the rate of $O(1/k^5)$. --- **Q3 A plot with time as the x-axis.** **A3** Thanks for your suggestion. We have included additional plots in terms of the running time in the attached pdf. All experiments are conducted using MATLAB R2021b on a MacBook Pro with an Apple M1 chip and 16GB RAM. Specifically, in Fig. 1 of the attached pdf file, we consider the logistic regression problem $f(x)= \frac{1}{n}\sum_{j=1}^n \log(1+e^{-y_j \langle a_j, x\rangle})$ as described in the paper, where the dimension is $d = 150$ and the number of samples is $n=2000$. As shown in Fig. 1(c), if we are seeking a solution of high accuracy, our method can require less running time than NAG due to its faster convergence. In addition, in Fig. 
2 of the attached pdf file, we consider the log-sum-exp function $f(x) = \log( \sum_{j=1}^n e^{\langle a_j, x\rangle - b_j}) $, where $d=150$, $n=150$, and we follow the procedure in [1] to generate $\\{a_j\\}$ and $\\{b_j\\}$. In this case, Fig. 2(c) shows that the run-time performance of A-QNPE is comparable to that of NAG. Finally, we would like to emphasize that the primary objective of our numerical experiments is to validate our theoretical discovery that A-QNPE can attain a faster convergence rate than NAG. With a more meticulous implementation, there is potential to enhance the practical efficacy of our method, and we have deferred this for future investigation. --- **Q4 Some details on the Online method are missing in the main paper.** **A4** Thanks for raising this point. Due to the page limit, we have to relegate the details of our online learning algorithm to the appendix (Sections C.3 and C.4). In the revision, we will describe the subroutine in more detail and present the key regret bound (Lemma 12 in Section D) in the main paper. --- **Q5 In theorem 1 the probability $p$ does not appear in the bound.** **A5** Thanks for raising this point. Note that our algorithm relies on the SEP oracle in Definition 2, which succeeds with a certain probability. To prove Theorem 1, we first show that, with probability at least $1-p$, every call of the SEP oracle is successful during the execution of our algorithm. Conditioned on this event, we proceed to prove our convergence rates, and thus they do not depend on $p$. Following your suggestion, we will add a remark on this in our revision. --- **Q6 Can you develop why you cannot use standard projection-free no-regret methods?** **A6** This is a good point. We note that standard projection-free no-regret methods, such as online Frank-Wolfe [2], are based on a linear minimization oracle (LMO). Unfortunately, implementing the LMO in our setting is also computationally expensive. 
Specifically, the constraint set in our online learning problem is given by $\mathcal{Z} = \\{\mathbf{B} \in \mathbb{S}^d_+: 0 \preceq \mathbf{B} \preceq L_1 \mathbf{I}\\}$. Consider the linear minimization problem $\min_{\mathbf{X} \in \mathcal{Z}} \langle \mathbf{A}, \mathbf{X} \rangle.$ We first need to compute the eigendecomposition $\mathbf{A} = \mathbf{V}\mathbf{\Lambda} \mathbf{V}^\top$, where $\mathbf{V}$ is an orthogonal matrix and $\mathbf{\Lambda} = \mathrm{diag}(\lambda_1,\dots,\lambda_d)$ is a diagonal matrix. Then the solution is given by $\mathbf{V}\mathbf{\Lambda}' \mathbf{V}^\top$, where $$\lambda_k' = \begin{cases} 0, & \text{if } \lambda_k \geq 0; \\\\ L_1, & \text{otherwise.} \end{cases}$$ Hence, implementing the LMO would require a full matrix eigendecomposition, which requires $O(d^3)$ arithmetic operations in general. On the other hand, as we discuss in this paper, the (approximate) separation oracle of $\mathcal{Z}$ is more efficient to implement, since it only requires computing the two extreme eigenvectors and eigenvalues of the given matrix. To the best of our knowledge, only the two recent papers [3,4] consider no-regret algorithms based on the separation oracle. In this paper, we develop our online learning algorithm based on [4] for two main reasons: (1) the algorithm in [4] appears to be simpler as it only requires one call to the separation oracle per iteration; (2) it is relatively straightforward to allow inexactness of the separation oracle. That said, we think it might also be possible to adapt the algorithm in [3] and achieve similar convergence guarantees. --- [1] A. Rodomanov and Y. Nesterov. Greedy quasi-Newton methods with explicit superlinear convergence, 2021. [2] E. Hazan and S. Kale. Projection-free online learning, 2012. [3] D. Garber and B. Kretzu. New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees, 2022. [4] Z. Mhammedi. 
Efficient projection-free online convex optimization with membership oracle, 2022. --- Rebuttal Comment 1.1: Title: Thank you Comment: I have read the rebuttal, and I maintain my score.
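To make Response A6 above concrete, the linear-minimization step over $\mathcal{Z} = \{\mathbf{B} : 0 \preceq \mathbf{B} \preceq L_1 \mathbf{I}\}$ can be sketched with NumPy. This is an illustrative check under arbitrary dimensions and constants, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L1 = 30, 2.0

# Random symmetric "cost" matrix A for the problem min_{B in Z} <A, B>.
M = rng.standard_normal((d, d))
A = (M + M.T) / 2

# Eigendecompose A and build the minimizer over Z: put eigenvalue L1 on
# directions where A's eigenvalue is negative, and 0 elsewhere.
lam, V = np.linalg.eigh(A)
lam_prime = np.where(lam < 0, L1, 0.0)
B_star = V @ np.diag(lam_prime) @ V.T

opt_val = np.trace(A @ B_star)  # equals L1 * (sum of A's negative eigenvalues)
assert np.isclose(opt_val, L1 * lam[lam < 0].sum())

# Sanity check: B_star is no worse than random feasible points of Z.
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    B = Q @ np.diag(L1 * rng.random(d)) @ Q.T  # eigenvalues in [0, L1]
    assert np.trace(A @ B) >= opt_val - 1e-8
```

As the rebuttal notes, the full eigendecomposition here costs $O(d^3)$ in general, which is exactly why an LMO-based projection-free method is expensive compared with a separation oracle that only needs the two extreme eigenpairs.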
Summary: This paper proposes a novel quasi-Newton method with a faster global convergence rate. The algorithm uses the framework of MS acceleration and updates the Hessian estimator via online learning. The obtained convergence rate is impressive: it is the first to show a faster global rate for a quasi-Newton method, which cannot be achieved by first-order methods. Strengths: The proposed algorithm is novel. The global convergence rate of $\tilde{\mathcal{O}}(\sqrt{d}/k^{2.5})$ is a significant theoretical result. Weaknesses: 1. Although the iteration complexity is impressive and significant, the total computation complexity of the proposed method is not very satisfactory. Compared with first-order methods, it requires additional Hessian-vector products with a complexity of $\min\lbrace d^{0.25}/\epsilon^{0.5},1/\epsilon^{0.625}\rbrace$, which may lead to an even higher computation cost than NAG. 2. The proposed method is complicated and may not be practical to use. For the experimental part, the authors do not present a comparison between their method and the baselines in terms of running time. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you compare the detailed computation cost in a table to make the results more clear? 2. Can you provide some experimental results in terms of running time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comment. We address your concerns below. --- **Q1 Compare the detailed computation cost in a table.** **A1** Thanks for your suggestion. In the following table, we summarize the detailed computation cost of NAG and our method A-QNPE, and we will also include it in our revision. In addition, we would like to clarify that our method requires extra matrix-vector products, rather than Hessian-vector products. Indeed, as a quasi-Newton method, we only need access to the gradient oracle.

| | Gradient queries | Matrix-vector products |
| ------- | ------------- | ------------- |
| NAG | $O(\frac{1}{\epsilon^{0.5}})$ | N.A. |
| A-QNPE | $\tilde{O}(\min\\{\frac{1}{{\epsilon}^{0.5}},\frac{d^{0.2}}{\epsilon^{0.4}}\\})$ | $\tilde{O}(\min\\{\frac{d^{0.25}}{\epsilon^{0.5}}, \frac{1}{\epsilon^{0.625}}\\})$ |

We observe that A-QNPE outperforms NAG in terms of gradient query complexity: it makes fewer or equal gradient queries, especially when $\epsilon < \frac{1}{d^2}$. On the other hand, A-QNPE requires additional matrix-vector product computations to implement the LinearSolver and SEP oracles. To assess the overall computation cost, we have to consider the cost of gradient computation, which varies depending on the specific problems. As a concrete example, consider the finite-sum minimization problem $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$. In this case, one gradient query typically costs $O(nd)$, while one matrix-vector product costs $O(d^2)$. Thus, the total computation cost of NAG and A-QNPE can be bounded by $O(\frac{nd}{\epsilon^{0.5}})$ and $ O(\frac{n d^{1.2}}{\epsilon^{0.4}}+ \frac{d^{2.25}}{\epsilon^{0.5}})$, respectively. In particular, our method will incur a lower computation cost when $\epsilon \ll \frac{1}{d^2}$ and $n \gg d^{1.25}$. As a final remark, we acknowledge that our method may be faster or slower than NAG, depending on the specific problem. 
Nevertheless, we would like to highlight that this is the first work to **theoretically demonstrate that quasi-Newton-type methods can outperform NAG in certain regimes**. Indeed, previous works [1,2] on quasi-Newton methods provide a convergence rate matching NAG, and as a result their overall computational cost will always be larger than NAG in theory. Thus, we believe our paper is an important conceptual advance for quasi-Newton methods and we leave the task of further reducing the computation cost as future work. --- **Q2 Conduct some experimental results in terms of the running time?** **A2** Thanks for your suggestion. We have included additional plots in terms of the running time; please check Figs. 1 and 2 in the attached pdf. All experiments are conducted using MATLAB R2021b on a MacBook Pro with an Apple M1 chip and 16GB RAM. Specifically, in Fig. 1 of the attached pdf file, we consider the logistic regression problem $f(x)= \frac{1}{n}\sum_{j=1}^n \log(1+e^{-y_j \langle a_j, x\rangle})$ as described in the paper, where the dimension is $d = 150$ and the number of samples is $n=2000$. As shown in Fig. 1(c), if we are seeking a solution of high accuracy, our method can require less running time than NAG due to its faster convergence. In addition, in Fig. 2 of the attached pdf file, we consider the log-sum-exp function following the suggestion of Reviewer N8Fk. The loss function is given by $f(x) = \log( \sum_{j=1}^n e^{\langle a_j, x\rangle - b_j}) $, where the dimension is $d=150$, the number of samples is $n=150$, and we follow the procedure in [1] to generate $\\{a_j\\}$ and $\\{b_j\\}$. In this case, Fig. 2(c) shows that the run-time performance of A-QNPE is comparable to that of NAG. Finally, we would like to emphasize that the primary objective of our numerical experiments is to validate our theoretical discovery that A-QNPE can attain a faster convergence rate than NAG. 
With a more meticulous implementation, there is potential to enhance the practical efficacy of our method, and we have deferred this as a prospect for future investigation. [1] A. Rodomanov and Y. Nesterov. Greedy quasi-Newton methods with explicit superlinear convergence, 2021
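For readers who want to reproduce this kind of experiment, the log-sum-exp objective can be evaluated in a numerically stable way by subtracting the maximum exponent. The sketch below uses random data rather than the generation procedure of [1], so it is illustrative only:

```python
import numpy as np

def logsumexp_loss(x, A, b):
    """f(x) = log(sum_j exp(<a_j, x> - b_j)), computed stably by
    subtracting the max exponent before exponentiating."""
    z = A @ x - b                      # z_j = <a_j, x> - b_j
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def logsumexp_grad(x, A, b):
    """Gradient: sum_j w_j a_j with softmax weights w = softmax(z)."""
    z = A @ x - b
    w = np.exp(z - z.max())
    w /= w.sum()
    return A.T @ w

rng = np.random.default_rng(1)
n, d = 150, 150                        # sizes matching the rebuttal's experiment
A_mat = rng.standard_normal((n, d))
b_vec = rng.standard_normal(n)
x0 = rng.standard_normal(d)

# Finite-difference check of the gradient along a random unit direction.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
h = 1e-6
fd = (logsumexp_loss(x0 + h * v, A_mat, b_vec)
      - logsumexp_loss(x0 - h * v, A_mat, b_vec)) / (2 * h)
assert np.isclose(fd, logsumexp_grad(x0, A_mat, b_vec) @ v, atol=1e-5)
```

The max-subtraction trick keeps the exponentials bounded by 1, which matters here because $\langle a_j, x\rangle$ can easily overflow a naive `np.exp` for moderate dimensions.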
Summary: This paper proposes an accelerated quasi-Newton proximal extragradient method for solving unconstrained smooth convex optimization problems. The algorithm can achieve a convergence rate of $\mathcal{O}\bigl(\min\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = \mathcal{O}(d)$, the method matches the optimal rate of $\mathcal{O}(\frac{1}{k^2})$ by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a faster rate of $\mathcal{O}\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. Strengths: This paper proposes an accelerated quasi-Newton proximal extragradient method for solving unconstrained smooth convex optimization problems. The algorithm can achieve a convergence rate of $\mathcal{O}\bigl(\min\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = \mathcal{O}(d)$, the method matches the optimal rate of $\mathcal{O}(\frac{1}{k^2})$ by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a faster rate of $\mathcal{O}\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. Weaknesses: I have a concern about the total computation cost. Let us consider the finite-sum form, that is, $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$ with $n = \mathcal{O}(d)$. Then the algorithm makes $N_\epsilon$ gradient queries, which implies an $\mathcal{O}(N_\epsilon d^2)$ computation cost. By Theorem 2(c), accounting for the cost of computing the SEP oracles, the total computation cost is $ \mathcal{O}(N_\epsilon^{1.25} d^2) $. In this case, A-QNPE does not have advantages over Nesterov's accelerated gradient (NAG). Technical Quality: 3 good Clarity: 3 good Questions for Authors: No Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comment. We address your concern below. --- **Q1 Comparison with NAG in terms of the total computational cost.** **A1** Thanks for raising this point. For easier comparison, we follow the suggestion by Reviewer H1ti and summarize the computation cost of NAG and our proposed method A-QNPE to achieve an $\epsilon$ accuracy in the following table.

| | Gradient queries | Matrix-vector products |
| ------- | ------------- | ------------- |
| NAG | $O(\frac{1}{\epsilon^{0.5}})$ | N.A. |
| A-QNPE | $\tilde{O}(\min\\{\frac{1}{{\epsilon}^{0.5}},\frac{d^{0.2}}{\epsilon^{0.4}}\\})$ | $\tilde{O}(\min\\{\frac{d^{0.25}}{\epsilon^{0.5}}, \frac{1}{\epsilon^{0.625}}\\})$ |

We observe that A-QNPE outperforms NAG in terms of gradient query complexity: it makes fewer or equal gradient queries, especially when $\epsilon < \frac{1}{d^2}$. On the other hand, A-QNPE requires additional matrix-vector product computations to implement the LinearSolver and SEP oracles. To assess the overall computation cost, we have to consider the cost of gradient computation, which varies depending on the specific problems. As a concrete example, consider the finite-sum minimization problem $f(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)$. In this case, one gradient query typically costs $O(nd)$, while one matrix-vector product costs $O(d^2)$. Thus, the total computation cost of NAG and A-QNPE can be bounded by $O(\frac{nd}{\epsilon^{0.5}})$ and $ O(\frac{n d^{1.2}}{\epsilon^{0.4}}+ \frac{d^{2.25}}{\epsilon^{0.5}})$, respectively. In particular, our method will incur a lower computation cost when $\epsilon \ll \frac{1}{d^2}$ and $n \gg d^{1.25}$. As a final remark, we acknowledge that our method may be faster or slower than NAG, depending on the specific problem. Nevertheless, we would like to highlight that this is the first work to **theoretically demonstrate that quasi-Newton-type methods can outperform NAG in certain regimes**. 
Indeed, previous works [1,2] on quasi-Newton methods provide a convergence rate matching NAG, and as a result their overall computational cost will always be larger than NAG in theory. Thus, we believe our paper is an important conceptual advance for quasi-Newton methods and we leave the task of further reducing the computation cost as future work. --- References: [1] K. Scheinberg and X. Tang. Practical inexact proximal quasi-Newton method with global complexity analysis, 2016 [2] H. Ghanbari and K. Scheinberg. Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates, 2018. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal.
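To make the claimed regime concrete, one can plug numbers into the two cost bounds from the rebuttal (constants and log factors dropped). This is back-of-the-envelope arithmetic on the stated bounds only, not a runtime measurement:

```python
def nag_cost(n, d, eps):
    """NAG bound: O(n*d / eps^0.5) -- gradient cost only."""
    return n * d / eps**0.5

def aqnpe_cost(n, d, eps):
    """A-QNPE bound: O(n*d^1.2 / eps^0.4 + d^2.25 / eps^0.5)
    -- gradients plus matrix-vector products."""
    return n * d**1.2 / eps**0.4 + d**2.25 / eps**0.5

# Regime from the rebuttal: eps << 1/d^2 and n >> d^1.25.
d = 10
n = int(1000 * d**1.25)  # n much larger than d^1.25
eps = 1e-6               # much smaller than 1/d^2 = 1e-2
assert aqnpe_cost(n, d, eps) < nag_cost(n, d, eps)

# Outside the regime (moderate accuracy), NAG's bound can be smaller.
assert nag_cost(n, d, 1e-1) < aqnpe_cost(n, d, 1e-1)
```

This matches the rebuttal's final remark: neither method dominates, and which bound is smaller depends on the accuracy target and the problem sizes.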
Summary: This paper uses the optimal and adaptive Monteiro-Svaiter acceleration framework to create a quasi-Newton method that solves unconstrained convex problems with Lipschitz gradients and Lipschitz Hessians at Nesterov's accelerated rate $O(1/k^2)$, but when the number of iterations is sufficiently larger than the dimension, namely $\Omega(d\log d)$, it provides better convergence guarantees in terms of gradient oracle complexity. The number of operations is superlinear $\widetilde{O}(N_\epsilon^{1.25})$ in the number of gradient queries $N_\epsilon$, and memory quadratic in the dimension is used. Strengths: The authors use a wide range of powerful technical tools: optimal and adaptive MS acceleration, projection-free online learning with a separation oracle, the conjugate gradient method, and the Lanczos algorithm. The results are novel and interesting. Weaknesses: The abstract (and some parts of the paper) says that the method matches the optimal rate $O(1/k^2)$ of NAG, and this rate is known to be optimal for functions that are convex and smooth. Here your setting is convex smooth + Lipschitz Hessians. It would be good to explicitly comment on how that construction has Hessian Lipschitz constant $L_2 = 0$ (since it is a quadratic) and therefore it also applies to your setting. I guess you can modify your algorithm to get results under the additional assumption of $\mu$-strong convexity. This could take a lot of work, but on the other hand you can follow the spirit of the usual reductions to show rates for your algorithm under strong convexity by using a sequence of restarts. So I would suggest adding a remark with this. The argument would be like this: you run the algorithm in stages, and after each stage you guarantee you halve the distance to $x^\ast$, so you only need to run a logarithmic number of stages.
In order to do that, you want to run the algorithm for $k$ iterations such that you guarantee the last equality here: $$ \mu/2 \|x_t-x^\ast\|^2 \leq f(x_t) - f(x^\ast) \leq O(\text{your bound}) = \mu/8 \|x_0-x^\ast\|^2 $$ The first inequality is strong convexity and the second one is your guarantee. In the regime in which $\mu$ is small enough, the number of iterations necessary in each stage should be $\Omega(d\log d)$, and so your improved bound kicks in. After doing this, it would be desirable to have a comparison of these results with the quasi-Newton results mentioned in the introduction that apply to strongly convex functions. Similarly, regarding "However, all of the results above only apply under the restrictive assumption that the objective function f is strictly or strongly convex. In the more general setting where f is merely convex...": given the reduction that solves convex smooth problems by regularizing with $+ \frac{\epsilon}{R^2} \|x-x_0\|^2$, where $R$ is an upper bound on the initial distance to the minimizer, you should discuss what one can get with previous methods under the reduction. That is, the strongly convex setting is not necessarily restrictive, and a comparison/discussion should be done. L839 "subourtine" -> "subroutine" Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: . Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We address your concerns below. --- **Q1 In your setting the function is convex smooth + Lipschitz Hessians. It would be good to comment on how the worst-case construction also applies to your setting.** **A1** This is an excellent point. The lower bound of $\Omega(1/k^2)$ for the class of convex smooth functions is established by a worst-case quadratic function, whose Hessian is constant (Lipschitz continuous with $L_2 = 0$). Therefore, the additional assumption of Lipschitz Hessian does not eliminate this worst-case construction from the considered problem class, and thus the $\Omega(1/k^2)$ lower bound also applies to our setting. Thanks for raising this point; we will add it to the revision. --- **Q2 Use restart to extend the results to the strongly convex setting.** **A2** This is a very good suggestion. Indeed, it is possible to use the restarting technique to further extend our result to the strongly convex setting, as suggested by the reviewer. However, it appears that such a reduction would only lead to a linear rate with a better dependence on the condition number, instead of a superlinear rate that we would expect from a quasi-Newton method in the strongly convex setting. To be more precise, if we follow the arguments of the reviewer and run our algorithm in multiple stages, where we guarantee that the distance to $x^\*$ is halved after each stage, we obtain from Theorem 1 that $$f(x_t) - f(x^\*) = O\left( \min\left\\{\frac{L_1\\|x_0-x^\*\\|^2}{t^2}, \frac{L_1\sqrt{d}\\|x_0-x^\*\\|^2}{t^{2.5}}\right\\} \right).$$ To get this bound, we upper bound $\\|B_0 - \nabla^2 f(z_0)\\|_F^2$ by $L_1^2 d$, ignore the log factor in (19) and only focus on the dominant term for simplicity. As the reviewer points out, by using strong convexity we have $\\|x_t-x^\*\\| \leq \frac{1}{2}\\|x_0-x^\*\\|$ if $f(x_t)-f(x^\*) \leq \frac{\mu}{8}\\|x_0-x^\*\\|^2$.
Hence, the number of iterations required in each stage can be bounded by $O(\min\\{({\frac{L_1}{\mu}})^{0.5}, d^{0.2}(\frac{L_1}{\mu})^{0.4}\\})$, which implies a total complexity of $$ O\left(\min\left\\{\left(\frac{L_1}{\mu}\right)^{0.5}, d^{0.2}\left(\frac{L_1}{\mu}\right)^{0.4}\right\\} \log \frac{1}{\epsilon}\right).$$ In the regime where $d \leq \sqrt{\frac{L_1}{\mu}}$, the obtained complexity bound outperforms NAG in terms of the dependence on the condition number. On the other hand, we note that several papers [1,2] have established a local non-asymptotic superlinear rate of the form $O((1/\sqrt{k})^k)$ for classical quasi-Newton methods and their variants. More recently, a global non-asymptotic superlinear rate is also shown for a quasi-Newton proximal extragradient method [3]. In comparison, the restarting scheme described above can only achieve global linear convergence. While the main focus of this paper is on demonstrating a provable gain for a quasi-Newton-type method over NAG in the convex setting, the above argument for extending our algorithm to the strongly convex setting is an interesting observation and we will add it as a remark to our revised paper. --- **Q3 Discuss what one can get with previous methods under the regularization reduction.** **A3** Thanks for your insightful comment. As the reviewer rightly pointed out, one can regularize $f$ with $\frac{\epsilon}{R^2}\\|x-x_0\\|^2$ to reduce a convex smooth problem into a strongly-convex one with $\mu = \frac{\epsilon}{R^2}$. However, to the best of our knowledge, applying this reduction directly to the existing analysis of quasi-Newton methods would not lead to a global complexity bound better than the one for NAG, as we elaborate next. - The results in [1,2] are crucially based on local analysis and require the initial point $x_0$ to be close enough to the optimal solution $x^*$. 
However, it is unclear how to explicitly bound the number of iterations before the iterate enters the local neighborhood, and even if this can be done, it seems unlikely that the total complexity would be better than NAG, as these results only provide a local convergence analysis. - The result from [3] seems to be the only strongly-convex result that can be compared with NAG in the convex setting using the mentioned regularization idea, as it provides a global convergence analysis with an explicit overall complexity bound. Based on the discussions in Appendix D.2 of [3], the authors showed a global complexity bound in the form of $O\left(\min\left\\{\frac{L_1}{\mu} \log \frac{1}{\epsilon},d^{\frac{1}{3}}\left(\frac{L_1}{\mu}\log\frac{1}{\epsilon}\right)^{\frac{2}{3}} \right\\}\right)$ for strongly-convex objectives. Since we have $\mu = \frac{\epsilon}{2R^2}$ under the reduction, this translates into a complexity bound of $\tilde{O}\left(\min\left\\{ \frac{1}{\epsilon}, d^{\frac{1}{3}}\left(\frac{1}{\epsilon}\right)^{\frac{2}{3}}\right\\}\right)$ for convex problems, which is worse than the bound $O(\left(\frac{1}{\epsilon}\right)^{\frac{1}{2}})$ by NAG. This is to be expected, since there is no form of acceleration in the proposed method of [3]. Thus, we conclude that simply applying the standard reduction to the existing analysis would not result in a complexity bound better than NAG. This highlights the need for a distinct algorithm and analysis specifically tailored to the convex setting, as presented in this paper. We will add a remark to the revised paper regarding this point. --- **Typo.** Thanks for catching the typo. We will fix this in the revision. --- References: [1] Q. Jin and A. Mokhtari. Non-asymptotic superlinear convergence of standard quasi-Newton methods, 2022. [2] A. Rodomanov and Y. Nesterov. New results on superlinear convergence of classical quasi-Newton methods, 2021. [3] R. Jiang, Q. Jin, and A. Mokhtari.
Online learning guided curvature approximation: A quasi-Newton method with global non-asymptotic superlinear convergence, 2023. --- Rebuttal Comment 1.1: Title: reply Comment: I have read the rebuttal. Please add those three points to the paper. Good work.
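For illustration, the restarting reduction discussed in A2 above can be sketched in a few lines of code. Here plain gradient descent on a strongly convex quadratic stands in for an A-QNPE stage, and the stage length is a placeholder chosen so that each stage at least halves the distance to the minimizer; this is a toy sketch of the reduction, not the paper's algorithm:

```python
import numpy as np

def run_stage(grad, x, step, iters):
    """One 'stage': a fixed number of gradient steps (stand-in for A-QNPE)."""
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Strongly convex quadratic f(x) = 0.5 x^T A x, minimizer x* = 0, mu = 1, L = 10.
d = 20
A = np.diag(np.linspace(1.0, 10.0, d))
grad = lambda x: A @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(d)
r0 = np.linalg.norm(x)

stages = 0
while np.linalg.norm(x) > 1e-6 * r0:
    # With step 1/L, 30 steps contract the distance by at least 0.9**30 < 1/2,
    # so each stage halves ||x - x*|| and only O(log(1/eps)) stages are needed.
    x = run_stage(grad, x, step=0.1, iters=30)
    stages += 1

print(f"stages = {stages}, final relative distance = {np.linalg.norm(x) / r0:.1e}")
```

The per-stage iteration count is where the analysis plugs in the bound $O(\min\{(L_1/\mu)^{0.5}, d^{0.2}(L_1/\mu)^{0.4}\})$ from A2; the sketch only demonstrates the logarithmic number of stages.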
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort in evaluating our paper. Following the suggestions by **Reviewer H1ti** and **Reviewer N8Fk**, we have included additional plots in the attached pdf file. - In Fig. 1, we consider the logistic regression problem $f(x)= \frac{1}{n}\sum_{j=1}^n \log(1+e^{-y_j \langle a_j, x\rangle})$ as described in the paper, where the dimension is $d = 150$ and the number of samples is $n=2000$. As shown in Fig. 1(c), if we are seeking a solution of high accuracy, our method can require less running time than NAG due to its faster convergence. - In Fig. 2, we consider the log-sum-exp function following the suggestion of **Reviewer N8Fk**. The loss function is given by $f(x) = \log( \sum_{j=1}^n e^{\langle a_j, x\rangle - b_j}) $, where the dimension is $d=150$, the number of samples is $n=150$, and we follow the procedure in [1] to generate $\\{a_j\\}$ and $\\{b_j\\}$. In this case, Fig. 2(c) shows that the run-time performance of A-QNPE is comparable to that of NAG. - In Fig. 3, we plot both the suboptimality gap $f(x_k) - f(x^*)$ and the number of iterations on a log scale for the log-sum-exp experiment. For this specific problem, we can observe empirically that NAG converges at a sublinear rate of $O(1/k^{3})$, while A-QNPE converges at a faster rate of $O(1/k^5)$. --- [1] A. Rodomanov and Y. Nesterov. Greedy quasi-Newton methods with explicit superlinear convergence, 2021 Pdf: /pdf/897e9210eda7844c4e19814ff5b109673f7b848d.pdf
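For completeness, here is a numerically stable implementation of the log-sum-exp objective used in Fig. 2; the data below is a random stand-in rather than the generation procedure from [1]:

```python
import numpy as np

def log_sum_exp(x, A, b):
    """f(x) = log(sum_j exp(<a_j, x> - b_j)), via the standard max-shift for stability."""
    z = A @ x - b
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

rng = np.random.default_rng(0)
n, d = 150, 150                      # problem sizes from the experiment
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x = np.zeros(d)

val = log_sum_exp(x, A, b)
z = A @ x - b
# log-sum-exp is a smooth upper bound on the max, within log(n) of it:
assert z.max() <= val <= z.max() + np.log(n)
```

Subtracting the maximum before exponentiating avoids overflow for large inner products, which matters once iterates move away from the origin.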
NeurIPS_2023_submissions_huggingface
2023
Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance
Accept (poster)
Summary: This paper proposes a novel gradient-free (zeroth-order) clipped version of stochastic similar triangles method for solving non-smooth stochastic convex optimization problem under a much weaker infinite variance assumption. The derived iteration and oracle complexity bounds are optimal in both convex and strongly convex case. Strengths: The problem addressed in this paper is well-motivated, the write-up is easy to follow, and the novelty and contribution are easy to identify. Weaknesses: 1. While the paper is a valuable theoretical contribution, the addition of experimental results would enhance the overall work by demonstrating the feasibility and effectiveness of this method in practical applications. 2. The absence of a Conclusion section or any definitive end to the work makes it feel like a work in progress. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Line 118: Should the upper bound of the second inequality be a notation $\sigma_B^{\alpha}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**While the paper is a valuable theoretical contribution, the addition of experimental results would enhance the overall work by demonstrating the feasibility and effectiveness of this method in practical applications.** Thank you for the suggestion. Please, see our general response to all reviewers where we provided the results of numerical experiments and also describe them. >**The absence of a Conclusion section or any definitive end to the work makes it feel like a work in progress.** Thank you for the suggestion. We will add a conclusion section that will summarize our work, briefly list our contributions and reveal possible directions for further research. >**Line 118: Should the upper bound of the second inequality be a notation $\sigma_B^\alpha$** Thank you for the question. We do not need to introduce $\sigma_B$ since its value is explicitly written in the third formula in Lemma 3. We will remove $\sigma_B$ in the final version. --- Rebuttal 2: Comment: I appreciate the authors' response, which adequately resolved my concerns. Due to the addition of the experimental and conclusion chapters, I will raise my score to 6. --- Rebuttal Comment 2.1: Title: Thank you for the response Comment: We are glad that the reviewer's concerns were resolved and are grateful for raising the score.
Summary: In this paper, the authors build upon the work that has been done in [28] and adjust the algorithms proposed there for zero-order oracles rather than gradient oracles. The goal is to optimize non-smooth stochastic convex optimization problems with infinite variance. Strengths: The paper does a good job of introducing the notions it uses with clarity. The organization of the paper helps the reader to understand the concepts. The quality of the write-up and the technical contributions look solid. Weaknesses: - Minor typo in the abstract: ajust --> adjust. - "We emphasis (--> emphasize) that this generalization requires an extension of the batching technique to (--> for) infinite variance." The emphasis should be emphasize in this sentence and I believe for is more suitable than to. - The motivation for why we should be interested in such problems is only given via citation numbers ([30, 6]), with no mention of what these motivating examples actually are. You have 9 pages of space without the references and you are not utilizing all of it. I suggest using the remaining space to attract more attention to your paper by mentioning concrete examples. This would help with the visibility of your paper as well. - "• can be generalized for saddle-point problems (based on [28]) and one-point feedback [13]. We leave it for future work." This should not be mentioned in the contributions since this paper has not actually made these contributions yet. You can add a conclusion and/or future work paragraph or section at the end and mention it there. - Similarly, related work could have a section of its own. - My main concern about this paper is that, even though adjusting the method from [28] for a two-point zero-order oracle is nontrivial, it feels a bit too incremental for a venue like NeurIPS.
Also, the trade-off between using weaker or stronger assumptions could be discussed further, in order to convince the reader of the advantages and disadvantages of zeroth-order methods and where to use which. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In which cases should we utilize the proposed accelerated zeroth-order method for non-smooth stochastic convex optimization problems with infinite variance? - It is not very clear to me why the paper considers the case when $\delta = 0$ on page 6. Could you please elaborate more on that? - Could you please list some instances where non-smooth stochastic convex optimization problems show up and why we should be interested in them? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The advantages and the disadvantages of the proposed method are not discussed in detail, except for the advantage of enabling weaker assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a detailed review of our work. Below, we address questions and concerns raised by the reviewer. >**Minor typo in the abstract: ajust --> adjust.** >**"We emphasis (--> emphasize) that this generalization requires an extension of the batching technique to (--> for) infinite variance." The emphasis should be emphasize in this sentence and I believe for is more suitable than to.** Thank you, we will fix these typos. > **Motivating examples.** Thank you for the suggestion. See our general response where we provide motivating examples. We will definitely add them to the final version. >**"• can be generalized for saddle-point problems (based on [28]) and one-point feedback [13]. We leave it for future work." This should not been mentioned in the contributions since this paper has not actually made this contributions yet. You can add a conclusion and/or future work paragraph or section in the end mention it there.** You are right, thank you for the comment. We will move this part to the conclusion section. >**Similarly, related work could have a section of its own.** We agree with the reviewer and promise to fix the final version of the paper accordingly. >**My main concern about this paper is that, even though adjusting the method from [28] for two-point zero-order oracle is nontrivial, it feels a bit too incremental for a venue like NeurIPS. Also, the trade-off between using weaker or stronger assumptions can be discussed further in order to convince the reader on the advantages and the disadvantages of the zeroth-order methods and where to use which.** Thank you for this suggestion. We should definitely more thoroughly explain the need for zero-order methods in the introduction. Zero-order methods should be used when one has black-box access to an objective function, e.g., the objective function is computed by a black-box simulation package or it is the result of a real experiment. 
Thus, automatic differentiation is impossible. This is often the case for various problems encountered in medicine, biology, physics, etc. The disadvantage of zero-order methods is the dependence of the iteration and oracle complexity on the problem dimension $d$. Their main advantage is the possibility to solve an optimization problem when it is impossible to apply first-order methods (as we cannot calculate derivatives). However, all existing zero-order methods are not robust to heavy-tailed noise, which is why we propose a zero-order algorithm that is able to cope with this issue. Our main result indeed relies on the techniques from Sadiev et al. (2023), but we would like to highlight that the adjustment of clipped-SSTM to the derivative-free setup is non-trivial. To do so, we needed to generalize the smoothing technique to the case of a bounded $\alpha$-th moment, e.g., see Lemma 11. Next, to achieve optimal oracle and iteration complexities, we also needed to generalize the classical batching result from the bounded-variance case to the bounded $\alpha$-th moment case. This results in Lemma 9, which is interesting in its own right. To the best of our knowledge, the previous result of this type is Lemma 7 from [1], which has an extra factor of $d^{1-\frac{\alpha}{2}}$, where $d$ is the dimension of the problem. For huge-scale problems, this factor can be large even for $\alpha \approx 3/2$. In contrast, our Lemma 9 is dimension-independent. [1] Wang et al. Convergence rates of stochastic gradient descent under infinite noise variance. NeurIPS 2021. >**In which cases should we utilize the proposed accelerated zeroth-order method for non-smooth stochastic convex optimization problems with infinite variance?** We refer to our general comment to the reviewers (see the section with motivating examples). >**It is not very clear to me why the paper considers the case when $\delta = 0$ on page 6.
Could you please elaborate more on that?** We thank the reviewer for spotting this typo. We will fix it in the final version. We focus mostly on the case of $\Delta = 0$ to make the proofs simpler and more readable. The proofs for the case of $\Delta > 0$ follow similar steps and require only more careful calculations. In particular, the additive noise creates extra non-stochastic bias terms in sums like (12). Since the noise $\delta(x)$ is bounded, these bias terms can be upper-bounded by induction using the same technique as in the proof for the case $\Delta = 0$ (the idea is described in lines 232-244). We will add these details to the final version. >**Could you please list some instances where non-smooth stochastic convex optimization problems show up and why we should be interested in them?** Stochastic non-smooth convex optimization problems arise in machine learning (population risk minimization) and statistical applications (e.g., likelihood maximization). Examples include SVMs and the ReLU activation function in deep learning. See also our general response for motivating examples. --- Rebuttal Comment 1.1: Comment: I thank the authors for carefully responding to my concerns. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We thank the reviewer for checking our response and for the positive rating.
Summary: This paper proposed and analyzed a zeroth-order method for non-smooth stochastic optimization under heavy-tailed noise and adversarial noise, by combining a ball-averaging-based smoothing technique (to tackle non-smoothness) and a gradient clipping technique (to tackle heavy-tailed/adversarial noise). This generalizes previous results where $L_\infty$ or $L_2$ boundedness of the noise is assumed. There are also several technical improvements over previous results, including proving a high-probability bound instead of a bound in expectation. Strengths: 1. The topic is important. Gradient-free methods for stochastic optimization are a popular field of research, and they certainly help to handle the heavy-tailed/adversarial noise setting, which is common in practice. 2. I have briefly gone through the proof and have no doubts about its correctness. 3. Though not proved in the paper, the theoretical bound seems close to being tight. Weaknesses: 1. The presentation is concise, but maybe at the cost of some necessary clarity. In Eqn. (2) it is not specified what distribution $\bf e$ should follow (it should be the uniform distribution in the unit _ball_), which leaves the entire equation undefined. It becomes even more confusing when later in (3) $\bf e$ is used with a different meaning, denoting a random vector uniformly distributed on the unit _sphere_! 2. It may also improve clarity to discuss the order of the theoretical bounds and compare them with previous results. In particular, it would be helpful to discuss whether the dependence on $\alpha$ is optimal. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: I have no additional questions other than the ones discussed in `Weaknesses`. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a positive evaluation of our work. Below, we address questions and concerns raised by the reviewer. >**The presentation is concise, but maybe at the cost of some necessary clarity. In Eqn. (2) it's not specified what distribution should $e$ follow (should be the uniform distribution in the unit ball), which leaves the entire equation undefined. It becomes even more confusing that later in (3) $e$ is used with a different meaning, denoting a random vector uniformly distributed on the unit sphere!** We thank the reviewer for spotting this mismatch in our notation. Eqn. (2) is the only place where vector $e$ should be uniformly distributed on the unit **ball**. Everywhere else $e$ is a random vector uniformly distributed on the unit **sphere**. To avoid this confusion, we propose the following change that we will apply in the final version: in Eqn. (2) we will use vector $u$ to denote a random vector uniformly distributed on the unit ball. Then, everywhere in the paper, vector $e$ will denote a random vector uniformly distributed on the unit sphere. >**It may also improve clarity to discuss the order of the theoretical bounds and compare them with previous results. In particular, it would be helpful to discuss whether the dependence of $\alpha$ is optimal.** Our work is the first on gradient-free optimization with heavy-tailed noise. The optimality of the oracle complexity in terms of the dependence on $d$ is an open problem in the non-smooth setting (if $\alpha = 2$, i.e., when the noise has bounded variance, it is optimal). However, it is not optimal for **smooth** stochastic convex optimization problems with a $(d+1)$-point stochastic zero-order oracle. The iteration complexity and the maximal level of noise are optimal (they coincide with the lower bounds in one of the regimes; see [1, 2]). We will add these remarks to the final version of the paper. [1] Bubeck, S., & Cesa-Bianchi, N. (2012).
Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1), 1-122. [2] A. Risteski and Y. Li. Algorithms and matching lower bounds for approximately-convex optimization. Advances in Neural Information Processing Systems, 29:4745–4753, 2016. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and will keep my score unchanged. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We thank the reviewer for checking our response and for the very positive evaluation.
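To make the ball-versus-sphere distinction above concrete, here is a small sketch of the two standard samplers (generic constructions, not code from the paper): normalizing a Gaussian vector gives a uniform point on the unit sphere, and additionally scaling by $U^{1/d}$ with $U \sim \mathrm{Uniform}(0,1)$ gives a uniform point in the unit ball:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(d):
    """Uniform on the unit sphere (the role of e in Eqn. (3))."""
    g = rng.standard_normal(d)
    return g / np.linalg.norm(g)

def sample_ball(d):
    """Uniform in the unit ball (the role of u proposed for Eqn. (2))."""
    return sample_sphere(d) * rng.uniform() ** (1.0 / d)

d = 10
e = sample_sphere(d)
u = sample_ball(d)
assert abs(np.linalg.norm(e) - 1.0) < 1e-12  # always on the sphere
assert np.linalg.norm(u) <= 1.0              # strictly inside the ball (a.s.)
```

The $U^{1/d}$ radius is what makes the ball sample uniform in volume rather than concentrated near the center.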
Summary: This paper presented derivative-free methods for the optimization of stochastic convex functions with a potentially infinite variance noise. Here the level of noise is defined in terms of the boundedness of modulus of a Hölder-type continuous condition. The main technique is to adopt a gradient clipping to the two-point estimation of the gradient of the randomized smoothed function. For some of their results, they also claim the attained bounds are rate optimal. The presentation and organization is of very clear and relatively easy to follow. Strengths: Overall the contribution is well-motivated, and fits into the flurry of recent development of methods for problem with infinite noise variance. The paper is well written, and the proofs are intuitive and relatively easy to follow. Weaknesses: I only have some minor comments and questions: * L113: it might be better to use {\xi_i}_i and {e_i}_i as the input of the function g^B. * L119: where is the \sigma_B used in the statement of Lemma 3? * the second line of Section 2: you might need to assume g \neq 0. * L126: where is the distribution D_k defined? * L139: could you elaborate on why the first term is optimal using the lower bound from [4]? Do you need some assumption on batch-size B here to illustrate rate optimality? * L152: it might be better to emphasize that w.p. 1-\beta, *for any* 1 \leq t \leq N, ..., if the boundedness of iteration holds uniformly. * Question (out of curiosity): is the bound in L190 optimal? Or, is there any lower bound for such a so-called maximum allowable noise level? * L224: proof -> prove. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a positive evaluation of our work. Below, we address questions and concerns raised by the reviewer. >**L113: it might be better to use {\xi_i}_i and {e_i}_i as the input of the function g^B.** Thank you for the suggestion, we agree with it. We did not want to complicate the formulas with $g^B$ and thus used this notation in the paper. We can use bold symbols with superscript $B$ so that there is no confusion: $\mathbf{e}^B = \lbrace e_i\rbrace_i$ and $\boldsymbol{\xi}^B =\lbrace\xi_i\rbrace_i$. Then we can use $\mathbf{e}^B$ and $\boldsymbol{\xi}^B$ as the input of the function $g^B(x, \mathbf{e}^B, \boldsymbol{\xi}^B)$. We will change this in the final version of our paper. >**L119: where is the \sigma_B used in the statement of Lemma 3?** Thank you for the question. We do not need to introduce $\sigma_B$ since its value is explicitly written in the third formula in Lemma 3. We will remove $\sigma_B$ in the final version. >**the second line of Section 2: you might need to assume g \neq 0.** Thank you, we will add this and also add that $\text{clip}(0,\lambda) = 0$. >**L126: where is the distribution D_k defined?** Thank you, this is a typo. It must be $D$. This is the distribution of the random variable $\xi$. We do not know $D$, but we can sample from it (see Eq. (1) in line 14). >**L139: could you elaborate on why the first term is optimal using the lower bound from [4]? Do you need some assumption on batch-size B here to illustrate rate optimality?** This optimality is in terms of $\varepsilon$. The first term is independent of the batch size $B$. In **[4]** (citation is from our work), it was proven that this corresponds to the lower bound in one of the regimes in a noiseless setup. The additional presence of noise cannot improve the iteration and oracle complexity; it can only make them worse (due to its possible adversarial nature).
The number of iterations of our method that allows the presence of (possibly non-stochastic) noise corresponds to the lower bound for a noiseless setup, that is, the number of iterations is optimal. Also, we refer to [1] where the authors propose an optimal algorithm in terms of oracle and iteration complexity but in classical settings of noise with bounded variance. The iteration complexity of the method from [1] coincides with the iteration complexity of our method. Thus, our algorithm can be also seen as a robust version of the algorithm from [1] that makes it possible to work with heavy-tailed noise. **[4]** Sébastien Bubeck, Qijia Jiang, Yin-Tat Lee, Yuanzhi Li, and Aaron Sidford. Complexity of highly parallel non-smooth convex optimization. Advances in neural information processing systems, 32, 2019. [1] Gasnikov, A., Novitskii, A., Novitskii, V., Abdukhakimov, F., Kamzolov, D., Beznosikov, A., ... & Gu, B. (2022, June). The power of first-order smooth optimization for black-box non-smooth problems. In International Conference on Machine Learning (pp. 7241-7265). PMLR. >**L152: it might be better to emphasize that w.p. 1-\beta, for any 1 \leq t \leq N, ..., if the boundedness of iteration holds uniformly.** Thank you, we will do it. >**Question (out of curiosity): is the bound in L190 optimal? Or, is there any lower bound for such a so-called maximum allowable noise level?** The lower bound for $\Delta$ (maximal absolute value of noise) is given in [1]. Our bound exactly corresponds to that lower bound in the regime when $\varepsilon^{-2} \lesssim d $. That is the large-dimension regime, when the subgradient method is better than the center of gravity type methods [2]. [1] A. Risteski and Y. Li. Algorithms and matching lower bounds for approximately-convex optimization. Advances in Neural Information Processing Systems, 29:4745–4753, 2016. [2] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization. Wiley-Interscience, 1983. 
>**L224: proof -> prove.** Thank you, we will fix this typo. --- Rebuttal Comment 1.1: Comment: I appreciate the thorough responses, and I'll maintain my current score. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We thank the reviewer for checking our response and for the very positive evaluation.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and time. In particular, we appreciate that the reviewers acknowledged the following strengths of our work: a well-motivated problem, an algorithm with tight and valid theoretical bounds, an important contribution, and a good write-up and organization. The reviewers also have several questions and concerns that we address in our responses to each reviewer. In this general comment, we respond to the comments and concerns common to several reviewers. ## Motivating examples In machine learning, the interest in gradient-free methods is mainly driven by the bandit optimization problem [1,2]. The vast majority of authors assume a sub-Gaussian distribution of rewards. However, in some practical cases (e.g., in finance [3]) the rewards distribution has heavy tails or can be adversarial. For heavy-tailed bandit optimization, we refer to [4]. Moreover, in many applications in medicine, biology, physics, etc., the objective function is only computable through numerical simulation or the result of a real experiment, i.e., automatic differentiation cannot be employed to calculate function derivatives. Usually, the black-box function we are optimizing is affected by stochastic or computational noise. This noise can arise naturally from modeling randomness within a simulation or from computer discretization. The classical setting assumes this noise to have light tails. However, in black-box optimization, we usually know nothing about the function; only its values at requested points are available/computable, so any assumptions about the noise may not be fulfilled, and if so, gradient-free algorithms may diverge. We aim to construct an algorithm that is robust even to heavy-tailed noise that does not have finite variance. In theory, one can consider heavy-tailed noise to model a situation in which noticeable outliers may occur in practice (even if the nature of these outliers is non-stochastic). 
That is why we relax the classical assumption of finite variance and consider the less burdensome assumption of a finite $\alpha$-th moment. ## Numerical experiments Following the reviewers’ requests, we conducted numerical experiments with the proposed method. We consider the following convex non-smooth problem: $\min_{x\in \mathbb{R}^{16}}f(x)$, where $f(x) = \frac{1}{500}\|\| Ax - b \|\|_1$ for some matrix $A \in \mathbb{R}^{500 \times 16}$ and vector $b \in \mathbb{R}^{500}$. The stochastic noise is introduced as follows: $f(x,\xi) = f(x) + \langle \xi, x \rangle$, where $\xi$ is a random vector having independent components sampled from the symmetric Levy $\alpha$-stable distribution with $\alpha = 3/2$. This problem satisfies Assumption 1 with $\mu = 0$ and Assumption 2 with $\alpha = 3/2$ and $M_2(\xi) = \|\|A\|\|_1 + \|\| \xi \|\|_2$. We notice that $\mathbb{E}[M_2(\xi)^\alpha]$ is bounded while $\mathbb{E}[M_2(\xi)^2] = +\infty$ due to the choice of $\xi$. The existing SOTA zeroth-order methods do not use clipping. They are also obtained from first-order methods using the smoothing technique and a two-point feedback oracle. We call the methods obtained this way from SGD and SSTM ZO-SGD and ZO-SSTM, respectively, and compare them with our method (ZO-clipped-SSTM). The results of the experiments are provided in the PDF attached to the response. As expected, the methods without clipping fail to converge due to the heavy tails in the distribution of the noise, while ZO-clipped-SSTM converges even in such a setup. This numerical experiment illustrates the necessity of clipping (or other non-linear transformations) in zeroth-order methods for handling heavy-tailed noise. If the reviewers have further questions/concerns/comments, we will be happy to participate in the discussion. --- References: [1] Flaxman, A. D., Kalai, A. T., & McMahan, H. B. (2004). Online convex optimization in the bandit setting: gradient descent without a gradient. arXiv preprint cs/0408007. 
[2] Bubeck, S., & Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1), 1-122. [3] S.T Rachev: Handbook of Heavy Tailed Distributions in Finance: Handbooks in Finance, Book 1. Elsevier, North Holland (2003) [4] Dorn, Y., Nikita, K., Kutuzov, N., Nazin, A., Gorbunov, E., & Gasnikov, A. (2023). Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits. arXiv preprint arXiv:2305.06743. Pdf: /pdf/8adfb4a211672ce656efbb7ad5251601cbf082ce.pdf
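The experiment described in the general rebuttal above can be sketched in code. This is a minimal illustration, not the authors' implementation: it uses the standard two-point randomized smoothing estimator with the same noise variable $\xi_i$ at both evaluation points, replaces the symmetric Levy $\alpha$-stable noise with small Gaussian noise to keep the sketch dependency-free, and runs plain clipped zeroth-order subgradient steps instead of the accelerated ZO-clipped-SSTM; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem from the rebuttal's experiment: f(x) = (1/500) * ||A x - b||_1, d = 16.
d, m = 16, 500
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def f(x):
    return np.abs(A @ x - b).sum() / m

def noisy_f(x, xi):
    # Additive noise model from the rebuttal: f(x, xi) = f(x) + <xi, x>.
    return f(x) + xi @ x

def clip(g, lam):
    # Gradient clipping: rescale g to norm at most lam; clip(0, lam) = 0.
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def smoothed_grad(x, tau=1e-3, B=64, noise_scale=0.01):
    # Two-point randomized smoothing estimator
    #   g^B(x) = d / (2 B tau) * sum_i (f(x + tau e_i, xi_i) - f(x - tau e_i, xi_i)) e_i,
    # with e_i uniform on the unit sphere and the same xi_i at both points.
    # Gaussian xi is a stand-in; the experiment uses Levy alpha-stable noise.
    g = np.zeros_like(x)
    for _ in range(B):
        e = rng.standard_normal(d)
        e /= np.linalg.norm(e)
        xi = noise_scale * rng.standard_normal(d)
        g += (noisy_f(x + tau * e, xi) - noisy_f(x - tau * e, xi)) * e
    return d / (2 * B * tau) * g

# Plain clipped zeroth-order subgradient steps (ZO-clipped-SSTM additionally
# uses batching and Nesterov-type acceleration).
x = np.ones(d)
for _ in range(100):
    x -= 0.5 * clip(smoothed_grad(x), lam=1.0)
```

The clipping threshold `lam` plays the role of the clipping level $\lambda$ in the discussion above; with heavier-tailed $\xi$ it is exactly this rescaling that keeps the update bounded.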
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides high-probability bounds for the convergence of gradient-free methods on convex and strongly-convex functions when the noise in the gradient oracle has infinite variance. An oracle provides $f(x,\xi)$, a noisy evaluation of the function $f$ at point $x\in \mathbb{R}^d$, where $\xi$ is the noise variable. For the same noise $\xi$, the function is $M_2(\xi)$-Lipschitz, where $M_2(\xi)$ quantifies the noise level. If the noise variance is finite ($\mathbb{E}[M_2(\xi)^2] < \infty$), [1] provides optimal iteration and oracle complexity for convergence in expectation. The primary technique used in [1] is a batched accelerated gradient method which uses smoothing. For a fixed constant $\tau>0$ which defines the smoothing level, smoothing computes an approximate gradient of $f$ at point $x\in \mathbb{R}^d$ as $$ g^B(x) = \frac{d}{2B \tau }\sum_{i=1}^B (f(x + \tau e_i, \xi_i) - f(x - \tau e_i, \xi_i))e_i $$ Here, $e_i$ are sampled uniformly from the unit sphere, $\xi_i$ are the noise variables of the oracle and $B$ is the batch size. In expectation, the smoothed gradient is the gradient of $ \mathbb{E}_{e,\xi}[f(x + \tau e, \xi)] $. For small values of $\tau$, this smoothed function is close to $f$. Further, even if the function $f$ is non-smooth but Lipschitz, smoothing makes it $\frac{\sqrt{d}M_2}{\tau}$-smooth. This paper extends this technique to the infinite-noise-variance setting, $\mathbb{E}[M_2(\xi)^\alpha] < M_2^\alpha$ for some $\alpha \in (1,2]$, by applying clipping. Specifically, the technique of clipped Stochastic Similar Triangles, used for handling heavy-tailed noise in smooth optimization, is extended to the non-smooth case by the above smoothing procedure. For convex Lipschitz functions, the iteration complexity and oracle complexity are $\frac{d^{1/4}}{\epsilon}$ and $\left(\frac{\sqrt{d}}{\epsilon}\right)^\frac{\alpha}{\alpha-1}$, respectively. 
For $\mu$-strongly convex and Lipschitz functions, the corresponding bounds are $\frac{d^{1/4}}{(\mu\epsilon)^{1/2}}$ and $\left(\frac{d}{\mu\epsilon}\right)^{\frac{\alpha}{\alpha-1}}$. These rates are shown to be optimal in $\epsilon$. Further, if the oracle provides a corrupted value of $f$ with an additive corruption of $\lvert\delta(x)\rvert \leq \Delta$, the authors derive the maximum possible values of $\Delta$ such that the convergence rates for both the smooth and non-smooth settings are unaffected by the corruption. **References** 1. Gasnikov et al. The power of first-order smooth optimization for black-box non-smooth problems. ICML 2022. 2. Sadiev et al. High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance. ICML 2023. Strengths: - **Interesting Problem Setting**: Heavy-tailed noise is a significant problem which violates the commonly used bounded-variance assumption in stochastic optimization. The authors extend the solution of clipping to handle it for derivative-free methods. - **Convergence Rates**: These are the first convergence rates for derivative-free optimization under heavy-tailed noise. Further, the rates are high-probability bounds instead of in-expectation ones. Additionally, for both cases, convex and strongly convex, the obtained rates are optimal in terms of the error $\epsilon$. - **Thorough literature review**: The authors thoroughly review existing results in derivative-free methods and clipping. Weaknesses: - **Presentation**: The paper seems to be missing an introduction and an experiment section. Although the paper is theoretical, the proposed algorithm, clipped-SSTM with two-point feedback, is new and should have been tested on at least synthetic problems. - **Lack of a motivating example**: The authors do not provide a motivating example which justifies the heavy-tailed noise in gradient-free settings. 
Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - Are there any lower bounds for oracle and iteration complexity in the adversarial corruption case in terms of $\Delta$? - Are there methods other than clipping to handle unbounded variance in stochastic optimization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a detailed summary of our main contributions. Below, we address questions and concerns raised by the reviewer. > **Missing introduction and numerical experiments.** We agree that the introductory part can be improved and will extend it in the final version of our work. We will add more motivational examples about the significance of the problem we are solving, in particular, motivational examples that justify the use of gradient-free methods as well as the need to consider heavy-tailed noise. Moreover, we understand your concern about the presentation of our work, but we promise that the final version will be much more comprehensive and precise. The numerical experiments are provided in the general response to all reviewers. > **Motivating examples.** Thank you for the suggestion. See our general response where we provide motivating examples. We will definitely add them to the final version. >**On the lower bounds.** The lower bound for the iteration complexity can be found in [1]. The lower bound for oracle complexity is presented in [1] and [2]. These lower bounds were obtained in a noiseless setup ($\Delta = 0$). The lower bound for $\Delta$ (maximal absolute value of noise) is given in [3]. The additional presence of noise cannot improve the iteration and oracle complexity; it can only make them worse. The upper bounds of our method meet (up to numerical and logarithmic factors) all three of these lower bounds (for the iteration complexity and $\Delta$, this holds in the regime $\varepsilon^{-2} \lesssim d$, the large-dimension regime, where the subgradient method is better than center-of-gravity-type methods [1]). Thus, our algorithm is optimal in terms of all three of these criteria. [1] Arkadii S. Nemirovsky and David B. Yudin. Problem complexity and method efficiency in optimization. Wiley-Interscience, 1983. [2] S. Bubeck, Q. Jiang, Y. T. Lee, Y. Li, A. Sidford, et al. 
Complexity of highly parallel non-smooth convex optimization. Advances in neural information processing systems, 2019. [3] A. Risteski and Y. Li. Algorithms and matching lower bounds for approximately-convex optimization. Advances in Neural Information Processing Systems, 29:4745–4753, 2016. >**Alternatives to clipping.** Heavy-tailed noise can also be handled without explicit gradient clipping. For example, one can use the Stochastic Mirror Descent algorithm with a particular class of uniformly convex mirror maps [1]. This algorithm does not require any explicit gradient clipping or normalization. However, the convergence guarantee was given in expectation. Moreover, it is not clear how to apply batching and acceleration to this method. Without these, we would not be able to obtain a method that is optimal not only in terms of oracle complexity but also in terms of the number of iterations. There are also some studies on alternatives to gradient clipping [2], but the results for these alternatives are given in expectation and are weaker than the state-of-the-art results for the methods with clipping. This is another reason why we choose gradient clipping to handle the heavy-tailed noise. [1] Vural, Nuri Mert, et al. "Mirror descent strikes again: Optimal stochastic convex optimization under infinite noise variance." Conference on Learning Theory. PMLR, 2022. [2] Jakovetić, Dus̆an, et al. "Nonlinear gradient mappings and stochastic optimization: A general framework with applications to heavy-tail noise." SIAM Journal on Optimization 33.2 (2023): 394-423. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for providing a detailed response to all of my questions. The motivating example and numerical experiments seem nice and the authors should include them in the final version of the draft. I'm increasing my score based on this. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We are very grateful to the reviewer for raising the score. 
We will definitely include motivating examples and numerical experiments in the final version.
null
null
null
null
null
null
Generalized Weighted Path Consistency for Mastering Atari Games
Accept (poster)
Summary: This paper proposes Generalized Weighted PCZero (GW-PCZero), which builds on EfficientZero and PCZero. The goal is to generalize the implementation of PCZero from board games to Atari games, which is achieved by adding the path consistency (PC) constraint to EfficientZero, extending the previous idea from PCZero. This paper further applies a weighting mechanism to path consistency, which calculates more accurate targets for agents to learn. Furthermore, this paper proves that neural-guided MCTS is guaranteed to find the optimal solution under the PC constraint, providing a theoretical foundation for PC. The experiments show that under the Atari 100k benchmark, GW-PCZero achieves 198% mean human-normalized performance, slightly higher than the SOTA EfficientZero (194%). More importantly, GW-PCZero only consumes 25% of the computational resources of EfficientZero. Strengths: 1. GW-PCZero achieves slightly higher performance under the Atari 100k benchmark than the SOTA EfficientZero while consuming only 25% of the computational resources of EfficientZero. 2. GW-PCZero generalizes the implementation of path consistency from board games to the case where the environment emits immediate rewards, such as Atari games. 3. This paper proves that neural-guided MCTS is guaranteed to find the optimal solution under the PC constraint. 4. The paper provides rich experimental data. In addition to the Atari 100k benchmark, the authors conducted experimental analyses on board games (Hex) by combining PCZero with the weighting mechanism. Furthermore, experiments were conducted on the classic control problem Cartpole. The diverse range of experimental environments shows the generality of this method. Weaknesses: It appears that the proposed method is primarily a combination of two previous works, PCZero and EfficientZero, without introducing significantly novel ideas or concepts in the algorithm itself. However, overall, I still think it is valuable to obtain the state of the art with this combination. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Appendix 7.1, regarding the board game Hex, you mentioned that "Weighted PCZero beats the original PCZero with a score of 175:163." However, based on the score of 175:163, it seems that the two are relatively close in performance, and the advantage of the weighting mechanism is not evident from this result alone. You may want to report confidence intervals. 2. In Appendix 7.2, the experiments for the classic control problem only include Cartpole, which is a relatively simple environment. Are there any other experiments that demonstrate the applicability of GW-PCZero to more complex control problems? For example, EfficientZero applied their work to DeepMind Control 100k, and it would be beneficial if you could apply your method to DeepMind Control 100k and compare it with their results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive comments and suggestions, and we will carefully revise the paper accordingly. We provide a detailed response to each question below. Q1: In Appendix 7.1, regarding the board game Hex, you mentioned that "Weighted PCZero beats the original PCZero with a score of 175:163." However, based on the score of 175:163, it seems that the two are relatively close in performance, and the advantage of the weighting mechanism is not evident from this result alone. A1: The effectiveness of the weighting mechanism becomes more evident when the game trajectories are of low quality. Here, we demonstrate the effect by training the model gradually on the self-generated games during the reinforcement learning process. In the early stages of training, the model's playing ability is relatively weak, and the quality of the generated game-playing trajectories is poor. When trained on the first 50k games, GW-PCZero beats PCZero with a score of 199:139. When trained on the first 250k games, GW-PCZero beats PCZero with a score of 216:122. It is noted that the performance gap between these two models becomes more evident, i.e., the advantage of the weighting mechanism becomes more obvious. Q2: In Appendix 7.2, the experiments for the classic control problem only include Cartpole, which is a relatively simple environment. Are there any other experiments that demonstrate the applicability of GW-PCZero to more complex control problems? A2: Thank you for your valuable suggestion. We are working on applying our GW-PCZero to the DeepMind Control (DMC) 100k tasks. Currently, our GW-PCZero is built on EfficientZero, but the source code of EfficientZero for the DMC tasks is not available from the original authors. We have not yet reproduced the results of EfficientZero on the DMC tasks. 
In the future, we will make every effort to successfully perform our GW-PCZero on the DMC tasks or find another way to demonstrate that the idea of Path Consistency (PC) also works on the DMC tasks.
Summary: This paper proposes GW-PCZero, an RL algorithm based on neural-guided Monte Carlo Tree Search (MCTS). GW-PCZero adopts the idea of Path Consistency (PC) from prior work, i.e., a regularizer that encourages the evaluation function to be consistent throughout an optimal path, to improve sample efficiency. Beyond this, GW-PCZero generalizes to environments with rewards given before the end of the episode, and adds a weight to the regularizer that decreases with the number of steps in the episode to account for increasing uncertainty later in the episode. The paper proves that the probability of finding the optimal path is lower-bounded, and achieves comparable / marginally better performance than the state-of-the-art (EfficientZero) with much less computational cost on many Atari environments. Strengths: **1. The writing is easy to follow and the idea is clearly conveyed.** There are many designs of the prior work that need to be introduced, such as neural MCTS, path consistency, and re-analyzation of both MuZero and EfficientZero; moreover, the motivations of PCZero and EfficientZero have to be clearly stated for the readers to understand the line of work. This paper does a good job in the first three sections, where the concepts are well-explained to the readers. The theorems and assumptions are also clearly stated in the methodology section. **2. The experiment results are solid and convincing.** The superior performance of GW-PCZero has been evaluated on Hex and many Atari environments, and the ablation on the most important hyperparameters, which are the PC weight factor and $\lambda$, is provided in detail. Furthermore, the authors have submitted the code, which makes the results more convincing. **3. The algorithm has theoretically proved performance, which is a valuable contribution.** The paper has proved that GW-PCZero has a bound for the probability of finding the optimal path, which is the first theoretical result in this line of work. 
I am convinced that the proof technique utilized in the paper would be beneficial for further theoretical research into neural MCTS. Weaknesses: **1. It seems that the result is somewhat sensitive to the choice of weighting hyperparameters.** In table 3, the scores are quite different with adjacent choices of $c_a$, and different environments seem to have very different optimal $c_a$. This is also the case in the selection of $\lambda$ in table 1 of the appendix, and there is no monotonicity exhibited on either side of the optimal hyperparameter (e.g. 8*8 and 9*9 MCTS player). **2. Other minor problems:** a) The second part of Table 3 in the main paper should also have a horizontal line between the first row and the rest of the table. b) Should Eq. 16 be clipped with 0 from below? By line 281, we know that when $i>10$ the weight will be negative, but it does not make sense to discourage path consistency at any time. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have four questions for this paper: 1. The experiment results only show how sensitive $\lambda$ is on the board games, which is a special case. How sensitive would $\lambda$ be on the Atari environments? 2. In section 4.4, the $\lambda$ is chosen according to the ratio between prior work performance and expert (human) performance. However, the evaluation of expert (human) performance might not be available in real-life applications. Have the authors considered using an adaptable $\lambda$ that dynamically changes during the training process? (I noticed that the authors mention "systematic investigation on automatic weighting methods in the future", but this is about a different hyperparameter.) 3. The authors claim that they can do marginally better than EfficientZero with a quarter of MCTS runs. While this is exciting, a constant-factor improvement in the time complexity might be cancelled out by implementation overhead and more values to calculate (e.g. 
the extra regularizer term). Is it possible for the authors to provide actual wall clock time for each method, or to explain that your method does not have a significant overhead compared to EfficientZero for each MCTS run? 4. It is a little surprising that in the Appendix section 7.2, MuZero without Path Consistency (PC) cannot deal with environments as simple as Cartpole, either with or without reanalyzation. Also, there are two interesting phenomena about this figure, which are 1) for all curves, the rewards first increase quickly, then decrease to a relatively low level, and 2) with no reanalyzation, MuZero without PC even seems to work better. Could the authors explain the figure more carefully? There are also three suggestions besides addressing the questions above and in the weakness section: 1. A limitations section is missing from the paper. While there are some places that implicitly mention limitations (e.g. line 327, "it deserves a systematic investigation on better automatic weighting ... in the future"), I suggest the authors think of more possible limitations (see limitation section) and summarize them in a separate paragraph. 2. I suggest the authors append pseudocode in the appendix to show more clearly how GW-PCZero works; it is even better if the authors could highlight the differences from prior work such as PCZero in the pseudocode. 3. The authors claim that there is no immediate ethical or social impact of this work. While this is true, there are still broader impacts of the paper to consider, such as potential misuse of the automated technology and potential job loss. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors implicitly mentioned limitations (line 327, "it deserves a systematic investigation on better automatic weighting ... in the future"). However, I suggest the authors to think more about limitations, such as hyperparameter sensitivity, manual selection of hyperparameters, possible limitation on future application by the nature of neural-guided MCTS, etc., and summarize them into a separate paragraph. As for potential negative societal impact, the author claims that there are no immediate negative societal impact. While this is true, I encourage the authors to be aware of the broader impact of their work on automated decision making. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your valuable suggestions. We will incorporate more discussion of the algorithm’s limitations in the paper. For example, similar to many existing algorithms, the performance of the current implementation of GW-PCZero relies on the selection of hyperparameters. In the long term, efficient reinforcement learning algorithms may be applied in many real-world scenarios, such as factory robots, which may lead to worker displacement. The complete pseudocode will be provided in the appendix. In the following, we provide a detailed explanation of the raised concerns. Q1: The experiment results only show how sensitive $\lambda$ is on the board games, which is a special case. How sensitive would $\lambda$ be on the Atari environments? A1: The conclusions drawn from board games also hold on Atari games. We added an experiment fixing $\lambda$ at 0.2 or 0.35, and the results are reported below. We observe that out of the 11 simple games, 9 achieved better results with smaller $\lambda$. This is consistent with the practical experience given in this paper, i.e., it is better to set a small $\lambda$ for the relatively simple games whose action space is less than 18.

|Game | $\lambda$=0.2 | $\lambda$=0.35 | Game | $\lambda$=0.2 | $\lambda$=0.35 |
| :-: | :-: | :-: |:-: | :-: | :-: |
|Breakout | 450.0 |384.1|Pong |19.8 |6.7|
|Qbert |13651.6 |7439.1 | Assault | 1224.1 |1350.2|
|UpNDown |12344.7 |3932.5 | Asterix | 14771.9 |9000.0|
|CrazyClimber| 9734.4 |6665.6 | DemonAttack| 24074.1 |12116.6|
|MsPacman |1594.1 |805.0| Amidar |97.0 | 160.1|
|KungFuMaster| 20543.8 |10400.0|

Q2: In section 4.4, the $\lambda$ is chosen according to the ratio between prior work performance and expert (human) performance. However, the evaluation of expert (human) performance might not be available in real-life applications. Has the author considered using an adaptable $\lambda$ that dynamically changes during the training process? 
A2: Thanks for your valuable suggestions. Making the coefficient $\lambda$ adapt dynamically during the training process is under our consideration. We have been trying to reduce $\lambda$ gradually as the training proceeds. In the current version of the paper, we suggest using a small $\lambda$ for the relatively simple games and a large $\lambda$ for the complex games according to our practical experience. We agree that it is challenging to assess the difficulty of each Atari game, and the ratio given in the paper is an approximate measure. In practical applications, other factors can also be adopted to evaluate the difficulty of the game, such as its state-space complexity, game-tree complexity, and so on. This deserves further investigation in the future. Q3: The authors claim that they can do marginally better than EfficientZero with a quarter of MCTS runs. While this is exciting, a constant-factor improvement in the time complexity might be cancelled out by implementation overhead and more values to calculate (e.g. the extra regularizer term). Is it possible for the authors to provide actual wall clock time for each method, or to explain that your method does not have a significant overhead compared to EfficientZero for each MCTS run? A3: The regularization term induced by Path Consistency (PC) in the loss function adds some extra computation in the training phase, but not much. We report the wall clock times by taking the game of Breakout as an example due to the time limitation in the rebuttal period. GW-PCZero with 60k training steps spends 254 minutes on the training process. If we remove the PC term, the training time of GW-PCZero reduces to 230 minutes. That is, the extra training computation due to the PC term costs 24 minutes, which is a 10.4% (24/230) increment in the wall clock time. Moreover, since the PC term does not affect the MCTS runs, the computational cost of GW-PCZero for each MCTS run is the same as that of EfficientZero. 
Therefore, the extra computational time due to the PC term is a relatively small proportion of the overall cost. We will report the actual wall clock time for all methods on all Atari games in the revised version of the paper.

Q4: It is a little surprising that, in Appendix Section 7.2, MuZero without Path Consistency (PC) cannot deal with environments as simple as CartPole, either with or without reanalyzation. Also, there are two interesting phenomena in this figure: 1) for all curves, the rewards first increase quickly, then decrease to a relatively low level, and 2) with no reanalyzation, MuZero without PC even seems to work better. Could the authors explain the figure more carefully?

A4: As discussed in the paper, PC can enhance learning efficiency. Therefore, as shown by the learning curves in Appendix Section 7.2, MuZero with PC learns to increase the score or reward faster than the version without PC in the early steps. Although the score curve of the PC version drops later, it climbs back and stays at the top robustly, whereas the non-PC version drops and stays at a low score level. What's more, reanalyzation is one of the sources of uncertainty, and it makes the weighting mechanism in PC more helpful. The first phenomenon, the fluctuation in the score curves, may be attributed to the implementation of MuZero. As for the second phenomenon, the PC version of MuZero achieves a much higher score than the non-PC version once learning stabilizes.

Q5: Should Eq. 16 be clipped at 0 from below?

A5: Thank you for pointing out this issue. We will revise it accordingly.

---

Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for your detailed response; I think it addresses my concerns well, though it is a pity that the proposed algorithm seems to be somewhat sensitive with respect to $\lambda$ on the Atari environment.
I have one follow-up question: Is the wall clock time of EfficientZero available?

---

Reply to Comment 1.1.1: Comment: Thank you very much for your response. Taking the game of Breakout as an example, EfficientZero needs 230 minutes if the number of updating steps is 60k and the MCTS root value correction is disabled. If the MCTS root value correction is enabled and the number of updating steps is still 60k, the time spent by EfficientZero increases to 275 minutes. If the number of updating steps is 120k and the MCTS root value correction is disabled, EfficientZero needs 503 minutes. Therefore, the extra training computation due to the MCTS root value correction costs 45 minutes, which is a 19.6% (45/230) increment in wall clock time. Because the calculations can be done in parallel, the time is not doubled by the correction. If the number of updating steps is doubled, the required training time is roughly doubled.
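The wall-clock bookkeeping in this thread can be sketched as a quick check (the helper name is hypothetical; the minute figures are the ones quoted above):

```python
def overhead_pct(with_term_min: float, base_min: float) -> float:
    """Relative wall-clock overhead of an added training component, in %."""
    return 100.0 * (with_term_min - base_min) / base_min

# PC term: 254 min with PC vs. 230 min without -> 24 extra minutes
pc = overhead_pct(254, 230)
# MCTS root value correction: 275 min vs. 230 min -> 45 extra minutes
root_corr = overhead_pct(275, 230)

print(f"PC term overhead: {pc:.1f}%")                       # 10.4%
print(f"Root value correction overhead: {root_corr:.1f}%")  # 19.6%
```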
Summary: This paper proposes a model-based RL method called GW-PCZero, which is built on EfficientZero and generalizes the path consistency (PC) constraint from board games with zero immediate rewards to environments with non-zero immediate rewards. The authors introduce a weighted averaging mechanism and use the mean f value of states along the path as the target for the PC constraint. Although the authors did comprehensive experiments to demonstrate that GW-PCZero slightly outperforms EfficientZero on the Atari 100k benchmark with 26 games at less computational cost, they missed a published paper, SCZero, whose idea is essentially the same as the proposed GW-PCZero. This calls the novelty and comprehensiveness of GW-PCZero into question. Meanwhile, since the authors impose the PC constraint on an off-policy RL algorithm, EfficientZero, off-policy issues may have a significantly negative impact on the effectiveness of the PC loss. In the off-policy RL setting, the authors' claim that the PC loss is a global constraint should be questioned. The authors offer only limited discussion and propose a simple linear weighting trick, which causes the PC loss to degenerate into the SC loss.

Strengths: 1. The paper extends path consistency to environments with non-zero immediate rewards. 2. The paper uses significantly less computational resources.

Weaknesses: The most critical issues of this paper: 1. The proposed GW-PC loss is extremely similar to a published paper, 'Self-Consistent Models and Values' (NeurIPS 2021). The PC loss can easily be converted to the SC loss as $$l^{SC-residual}=\sum_{k=0}^K (\hat{r}(s_k, a_k)+\gamma \hat{v}(s_{k+1})-\hat{v}(s_k))=\sum_{k=0}^K (g(s_{k+1})+\hat{v}(s_{k+1})-g(s_k)-\hat{v}(s_k))=l^{PC}$$ However, it seems that the authors did not notice the SCZero paper at all, neither citing it nor taking SCZero as the most important baseline. 2. Imposing path consistency on an off-policy algorithm like EfficientZero is not reasonable.
For tasks with non-zero immediate rewards like Atari games, the old state transitions collected by old policies bring a significant off-policy issue, which makes the proposed PC loss hardly the global constraint the authors claim. Considering the off-policy issue and computational limitations, the authors propose a sliding window and a linear weighting trick to reduce the impact of the marginal states within the sliding window. This leaves the proposed PC loss with little improvement over the previous SC loss, whether in terms of fundamental ideas or mathematics.

Minors: 1. The paper doesn't report the median human normalized score. 2. The coefficient of the PC loss needs to be adjusted according to the task. Generally, this is not a common hyperparameter in most RL settings, and performance should not be sensitive to the coefficient of the PC loss. 3. The paper doesn't compare against other data-efficient RL algorithms like IRIS and Dreamer. 4. The paper over-claims its contribution: GW-PCZero only slightly outperforms the full-version EfficientZero (with 120k training steps).

Technical Quality: 2 fair Clarity: 3 good

Questions for Authors: 1. Can you report the median human normalized score? 2. Can you report the performance of GW-PCZero($c_\alpha$), and also the performance of Dreamer and IRIS if possible? 3. Can you report performance with a fixed coefficient of the PC loss, e.g., $\lambda=0.2$ or $0.4$? 4. Can you report the performance of GW-PCZero with 120k training steps? 5. Can you provide experimental results showing that the PC loss is better than the SC loss in the Atari 100k setting?

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: The paper's novelty should be questioned, since its PC loss is basically a 'reward-sum' version of DeepMind's SC loss published in 'Self-Consistent Models and Values' (NeurIPS 2021). The coefficient of the PC loss needs to be adjusted according to the task. Generally, this is not a common hyperparameter in most RL settings, and performance should not be sensitive to the coefficient of the PC loss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
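The equivalence this reviewer asserts between the SC residual and the PC residual holds term by term when $\gamma = 1$ and $g(s_k)$ is the cumulative reward collected before reaching $s_k$, since then $g(s_{k+1}) - g(s_k) = \hat{r}(s_k, a_k)$. A minimal numeric check with made-up rewards and value estimates (not from either paper):

```python
rewards = [1.0, -0.5, 2.0, 0.0]      # r(s_k, a_k) for k = 0..3, made-up
values = [3.0, 2.5, 1.0, 0.5, 0.2]   # v_hat(s_k) for k = 0..4, made-up

# cumulative reward collected before reaching s_k
g = [sum(rewards[:k]) for k in range(len(values))]

# SC residual with gamma = 1: sum_k [ r_k + v(s_{k+1}) - v(s_k) ]
sc = sum(rewards[k] + values[k + 1] - values[k] for k in range(len(rewards)))

# PC-style residual: sum_k [ (g(s_{k+1}) + v(s_{k+1})) - (g(s_k) + v(s_k)) ]
pc = sum((g[k + 1] + values[k + 1]) - (g[k] + values[k])
         for k in range(len(rewards)))

assert abs(sc - pc) < 1e-12  # identical, as g(s_{k+1}) - g(s_k) = r_k
```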
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions, and in the following we address the concerns you have raised.

Q1: Can you report the median human normalized score?

A1: The median human normalized scores are 0.399 and 0.388 for GW-PCZero and EfficientZero, respectively. The results are obtained under the same training conditions. The human normalized scores of GW-PCZero for each of the 26 games are given in decreasing order as follows. It is observed that GW-PCZero outperforms humans on 11 games (score > 1). The median is computed as the average of the $13^{th}$ (Gopher) and $14^{th}$ (BattleZone) scores.

|Game | Score | Game | Score | Game | Score |
| :-: | :-: | :-: |:-: | :-: | :-: |
|Breakout | 15.57 |DemonAttack| 13.15 |Krull |5.79|
|Boxing |3.46 |RoadRunner |2.14 |Assault |1.93|
|Asterix | 1.76 | Jamesbond |1.38 | Pong |1.15|
|UpNDown |1.06 |Qbert |1.01 |KungFuMaster |0.90|
|Gopher |0.48 |BattleZone |0.32 |BankHeist |0.26|
|Hero | 0.24 | MsPacman |0.19 | Alien |0.07|
|Kangaroo | 0.07 | Amidar |0.05 | ChopperCmd |0.05|
|Frostbite | 0.04 | Seaquest |0.02 | PrivateEye| 0.00|
|Freeway | 0.00 | CrazyClimber |-0.04|

Q2: Can you report the performance of GW-PCZero($c_\alpha$)? And also the performance of Dreamer and IRIS if possible.

A2: The performance of GW-PCZero($c_\alpha$) is reported as follows. The experimental results indicate that a proper weighting mechanism is beneficial for improving performance, and we will conduct a more detailed and rigorous study of the weighting mechanism in the future.
|Game | Score | Game | Score | Game | Score |
| :-: | :-: | :-: |:-: | :-: | :-: |
|Alien | 982.5 |Amidar |97.0 | Assault |1224.1|
|Asterix| 22131.3 | BankHeist| 207.2 | BattleZone |16812.5|
|Boxing | 53.8 | Breakout| 450.0| ChopperCmd |1150.0|
|CrazyClimber| 9734.4 | DemonAttack |24074.1| Freeway |0.0|
|Frostbite | 249.7 | Gopher |1286.9 | Hero |8171.3|
|Jamesbond |525.0 |Kangaroo |262.5 | Krull |7782.0|
|KungFuMaster| 20543.8 |MsPacman |1594.1 |Pong |19.8|
|PrivateEye |96.9 | Qbert | 13651.6| RoadRunner| 16809.4|
|Seaquest | 1215.6| UpNDown |12344.7| | |

According to the original paper of IRIS, when the training data size is limited to 100k frames, the average score of IRIS is 1.046, which is lower than that of EfficientZero and GW-PCZero. We will add the results of IRIS to our paper in the revised version. Dreamer in its original paper was trained with 200M frames, a very different setting from ours. We will make every effort to report results for Dreamer under 100k training frames in the future.

Q3: Can you report performance with a fixed coefficient of the PC loss?

A3: We report the results below with the coefficient fixed at 0.35. The normalized mean score is 1.421, which is higher than EfficientZero's 1.212 under the same training conditions. The value 0.35 is a moderate setting for the coefficient of the PC loss. In practice, we suggest using a small coefficient for simple games and a large one for complex games; that is, the coefficient can be further adjusted for improved performance.
|Game | Score | Game | Score | Game | Score |
| :-: | :-: | :-: |:-: | :-: | :-: |
|Alien |699.7 |Amidar |160.1 |Assault |1350.2|
|Asterix | 9000.0 | BankHeist |155.6 | BattleZone |9968.8|
|Boxing |27.4 | Breakout | 384.1| ChopperCmd| 1150.0|
|CrazyClimber| 6665.6 | DemonAttack |12116.6| Freeway |0.0|
|Frostbite | 249.7 | Gopher | 1769.4 | Hero |12646.4|
|Jamesbond |357.8 |Kangaroo | 325.0 | Krull |7782.0|
|KungFuMaster| 10400.0 |MsPacman |805.0 |Pong |6.7|
|PrivateEye |100.0 |Qbert |7439.1 |RoadRunner| 5693.8|
|Seaquest | 1006.3 | UpNDown |3932.5 |||

Q4: Can you report the performance of GW-PCZero with 120k training steps?

A4: The Path Consistency (PC) constraint is able to improve the model's learning efficiency and make it converge fast when the sample size is limited. It should be noted that in this paper the amount of game frames collected for training is fixed at 100k, regardless of whether the number of training steps is 60k or 120k. Therefore, increasing the number of training steps does not always lead to performance improvement, as GW-PCZero may converge before the 120k-th step. In practice, we observe certain performance improvements in several games. For example, the score is improved from 19.8 to 20.6 for Pong, and from 262.5 to 1793.8 for Kangaroo. The performance on most of the games remains roughly the same. Moreover, increasing the number of training steps from 60k to 120k would double the training time. We also need to appropriately adjust the decay rate of the learning rate in EfficientZero, because it is related to the number of training steps. The comparison between GW-PCZero and the full-version EfficientZero with 120k training steps on some of the Atari games is shown as follows. GW-PCZero has already converged in some games when the number of training steps is 60k. The mean normalized score on these 10 games is 3.86 for GW-PCZero and 2.90 for EfficientZero, respectively.
We will finish the computation on all 26 Atari games in the revised version of the paper.

|Game|GW-PCZero|EfficientZero|Game|GW-PCZero|EfficientZero|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Breakout|450.0|406.5|DemonAttack|24074.1|13298.0|
|Jamesbond|525.0|459.4|Krull|7782.0|6047.0|
|MsPacman |1594.1 |1387.0|Kangaroo|1793.8|962.0|
|Pong|20.6|20.6|Hero|10818.8 |8530.1|
|Amidar|97.0|101.9|PrivateEye|96.9|100.0|

---

Rebuttal 2: Comment: 1. I would like to question the paper's novelty. It seems that your proposed GW-PCZero is very similar to DeepMind's 'Self-Consistent Models and Values' published in NeurIPS 2021. SCZero defines an SC-residual loss as $$l^{sc-residual}=\sum_{k=0}^K (\hat{r}(s_k, a_k)+\gamma \hat{v}(s_{k+1})-\hat{v}(s_k)),$$ which is basically equal to your PC loss as $$l^{sc-residual}=\sum_{k=0}^K (g(s_{k+1})+\hat{v}(s_{k+1})-g(s_k)-\hat{v}(s_k)).$$ It seems that your PC loss is just a multi-step version of the SC loss, so I think the novelty of this article should be questioned unless you can prove that the effectiveness of the PC loss is significantly improved compared to the SC loss. 2. For A1, the median score should be a much higher value, such as EfficientZero's 1.09. This result cannot prove that you have a significant performance improvement compared to EfficientZero. 3. For A3, it seems that GW-PCZero's performance is extremely sensitive to the PC loss coefficient, and that GW-PCZero cannot outperform EfficientZero with a fixed PC loss coefficient. I hope the authors can release results with 120k training steps and a fixed PC loss coefficient. Considering the potential issues in the innovation of the article and the weakness of the experimental results, I will downgrade my score to 4.

---

Rebuttal Comment 2.1: Comment: Thank you for your response and for providing SCZero as a reference.
Response to Q1: Path consistency (PC), i.e., "f values on one optimal path should be identical", is rooted in the path optimality of the classical $A^*$ search algorithm (Hart et al., 1968). __CNneim-A (Xu et al., 1987)__ relied on this optimality, using $A^*$ to perform lookahead scouting to estimate a segment on the optimal path and guiding the $A^*$ search by the average of the f values from the root to the current state (i.e., the historical trajectory) and those on this segment; it named this condition path consistency. Subsequently, PC was suggested as a complement to deep reinforcement learning to improve learning efficiency __(Xu, 2018)__. As shown in Equation (8) of __(Xu, 2018)__, PC was suggested to regularize the learning process by adding a weighted penalty to the loss function $$ L(\theta)=\sum_{s\in Path}[w_sL_s(\theta)+w_cL_s^c(\theta)]+|\theta|^{\gamma_r}, $$ where $L_s(\theta)$ is the reinforcement learning loss that results from the interaction with the environment, $L_s^c(\theta)$ is the consistency loss, and $w_s$ and $w_c$ are adjustable hyperparameters. $L_s^c(\theta)$ is evaluated as the deviation from an estimated value of the optimal path, as suggested by Equation (9) in (Xu, 2018), $$ L_s^c(\theta)=|f(s)-f^*(s)|^{\gamma}, $$ where $f^*(s)$ is a moving average of the f values of states within a segment window $W_s\subset Path$ with $s\in W_s$. If $\gamma$ is set to 2, $L_s^c$ becomes the $L_2$ deviation used in this paper. Although the idea of using PC to improve the learning efficiency of reinforcement learning algorithms was already proposed (Xu, 2018), further investigation was needed to implement and test the potential of this direction. PCZero (Zhao et al., 2022) applied this idea to AlphaZero, demonstrating PC's effectiveness for the first time in board games. In this paper, we generalize the idea of PC from board games to scenarios where the environment emits immediate rewards, such as Atari games.
Path consistency is a __global constraint__ on all states on the optimal path. As illustrated in Equation (12), if the TD error between state $s_t$ and all states on the path is minimized to $0$, $s_t$'s PC loss $L_{PC}(s_t)$ is minimized. To facilitate practical implementation, the PC target is prepared within a selected window, but PC is still conceptually a global constraint on the entire path. The SC loss is a local constraint associated with two adjacent states: for a given path, under the SC loss constraint the TD error between states increases as the distance between them grows. Assuming the SC error between two adjacent states is $\delta$, the TD error between $s_t$ and $s_{t+k}$ might be amplified to $k\delta$, which is less reliable than the PC loss. We replaced the PC loss with the SC loss and conducted experiments with 60k training steps on some of the Atari games. The mean normalized score of the SC loss is 2.78, which is lower than the PC loss's 3.76 but higher than EfficientZero's 2.21. __These experimental results indicate that the SC loss can also improve learning efficiency, but the PC loss is more effective than the SC loss.__

|Game|EfficientZero|SC loss|PC loss|Game|EfficientZero|SC loss|PC loss|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Breakout|366.7|415.4|450.0|Qbert|8120.3|4540.6|13651.6|
|Assault|994.8|1114.2|1224.1|UpNDown|7592.5|14121.9|12344.7|
|Asterix|17734.4|18668.8|14771.9|CrazyClimber|8059.4|9953.1|9734.4|
|DemonAttack|7940.8|12743.9|24074.1|MsPacman|967.5|883.1|1594.1|
|Amidar|60.6|116.2|97.0|KungFuMaster|8956.3|8853.1|20543.8|
|Krull|3818.5|5147.9|7782.0|Score|2.21|2.78|__3.76__|

__(Hart et al., 1968)__ Hart, Peter E., Nils J. Nilsson, and Bertram Raphael. "A formal basis for the heuristic determination of minimum cost paths." IEEE Transactions on Systems Science and Cybernetics 4.2 (1968): 100-107. __(Xu et al., 1987)__ Xu, Lei, Pingfan Yan, and Tong Chang. "Algorithm CNneim-A and its mean complexity." Proc. of the 2nd International Conference on Computers and Applications.
IEEE Press, Beijing. 1987. __(Xu, 2018)__ Xu, Lei. "Deep bidirectional intelligence: AlphaZero, deep IA-search, deep IA-infer, and TPC causal learning." Applied Informatics. Vol. 5. No. 1. Berlin/Heidelberg: Springer Berlin Heidelberg, 2018. __(Zhao et al., 2022)__ Zhao, Dengwei, Shikui Tu, and Lei Xu. "Efficient Learning for AlphaZero via Path Consistency." International Conference on Machine Learning. PMLR, 2022.

---

Reply to Comment 2.1.1: Comment: Response to Q2: The median score reported in A1 is obtained with 60k training steps. We will try our best to reproduce the results of EfficientZero and provide the results of our GW-PCZero when the number of training steps is 120k.

Response to Q3: The value of $\lambda$ is decided based on the game's complexity. In this paper, $\lambda$ is set to $0.2$ if the game is relatively simple and to $0.35$ if the game is complex. In the following, we present the results of several simple games, for which the size of the action space is less than 18, with $\lambda=0.2$. The mean normalized score with $\lambda=0.2$ is 3.34, much larger than 2.29 with $\lambda=0.35$. For games of similar complexity, the value of $\lambda$ is not adjusted between games. Simple games tend to exhibit better performance with smaller $\lambda$ values, while complex games tend to demonstrate better performance with larger $\lambda$ values.

|Game|$\lambda=0.2$|$\lambda=0.35$|Game|$\lambda=0.2$|$\lambda=0.35$|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Breakout|450.0|384.1|Pong|19.8|6.7|
|Qbert|13651.6|7439.1|Assault|1224.1|1350.2|
|UpNDown|12344.7|3932.5|Asterix|14771.9|9000.0|
|CrazyClimber|9734.4|6665.6|DemonAttack|24074.1|12116.6|
|MsPacman|1594.1|805.0|Amidar|97.0|160.1|
|KungFuMaster|20543.8|10400.0|Score|3.34|2.29|
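The $k\delta$ amplification argument in this rebuttal thread can be illustrated with a tiny sketch (the f values are made-up; this is not the paper's implementation):

```python
# Suppose the SC loss only keeps adjacent residuals at delta; the gap between
# s_t and s_{t+k} can still grow to k * delta. PC instead ties every state to
# the window mean, so no single state drifts arbitrarily far from the target.
delta, k = 0.1, 5
f = [10.0 + i * delta for i in range(k + 1)]  # made-up f values along a path

adjacent = [f[i + 1] - f[i] for i in range(k)]
assert all(abs(e - delta) < 1e-12 for e in adjacent)  # SC error per step
assert abs((f[-1] - f[0]) - k * delta) < 1e-12        # amplified to k*delta

f_bar = sum(f) / len(f)                      # PC target: mean f over the window
pc_losses = [(fi - f_bar) ** 2 for fi in f]  # each state pulled toward f_bar
```

Under the SC constraint the endpoint error is $k\delta = 0.5$ here, while the largest deviation any state has from the PC target is only half of that.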
Summary: This paper proposes GW-PCZero, a reinforcement learning method, extending the PCZero technique, which is currently limited to board games and lacks theoretical backing. GW-PCZero is designed for environments with non-zero immediate rewards, such as Atari games. It maintains path consistency by regularizing the value estimation with its deviation from the mean value along the path, while a new weighting mechanism minimizes scouting variance. The paper provides the first theoretical proof that a neural-guided Monte Carlo Tree Search is guaranteed to find an optimal solution under path consistency, and it reaches better performance on the Atari 100k benchmark with 26 games compared to the previous SoTA, EfficientZero. Strengths: 1. The paper is clear and understandable, especially the details in the Preliminary section. 2. The paper provides a theoretical guarantee under the constraint of path consistency. 3. The performance increase in the experiments shown is laudable. Weaknesses: 1. The novelty seems limited. Compared to PCZero, it modifies the PC target through a linear weighting method, which is a little tricky and common. 2. The authors claim a theoretical guarantee for path consistency (PC) for the first time, but PC is the main contribution of PCZero rather than GW-PCZero, which is confusing. Moreover, there is no guarantee for the weighting method concerning convergence rate or optimality. 3. No ablation for different weighting methods; only the tradeoff c is considered. For example, as illustrated in Figure 1 (left), why are the weights not exponentially decayed? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As you mention in Definition 3.1 (L125), PC gives the constraint on the f values of the optimal path: for the optimal path, the f values stay the same along the path. But I am wondering, "when f values match along a path, is this path optimal, especially when neural nets estimate the f values?" 2.
For tasks with much longer horizons, the computation cost of path consistency can be much higher. How can the corresponding cost be reduced in such cases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. The novelty of introducing a weighting mechanism without further clarification is limited. The authors claim that the weighting mechanism can mitigate the uncertainty/variance in L234, but no experiments or theorems are provided regarding the uncertainty/variance. 2. The experiments and analysis of the weighting mechanism, which is the main contribution of the work, are not enough. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable suggestions and would like to address the questions one by one as follows.

Q1: The novelty seems limited. The authors claimed a theoretical guarantee for path consistency (PC) for the first time. But PC is the main contribution of PCZero instead of GW-PCZero, which is confusing.

A1: The primary contribution of this paper is to extend the concept of Path Consistency (PC) to a broader range of application scenarios with non-zero immediate rewards, such as Atari games. In the literature, PC was considered by PCZero solely on board games, for which the immediate rewards are always zero. Moreover, PCZero only demonstrated the effectiveness of PC through experiments and lacked theoretical guarantees. In this paper, we provide a theoretical foundation for PC for the first time: neural-guided MCTS is guaranteed to find the optimal solution under the constraint of PC. Furthermore, we introduce a weighting mechanism into the calculation of the PC target to reduce the variance caused by the scouting uncertainty and improve performance.

Q2: As you mentioned in Definition 3.1 (L125), PC gives the constraint of the optimal path for the f values. For the optimal path, the f values stay the same along the path. But I am wondering, "when f values match along a path, is this path optimal, especially when neural nets estimate the f values?"

A2: If the f values are estimated accurately, a path with the same f value is the optimal path. The reason is that the root node is definitely on the optimal path, and the f value of the root node is the global optimal solution with the highest reward. Therefore, nodes that have the same f value as the root node also have the highest reward, and they constitute the optimal path. However, if the quality of the estimated f values cannot be guaranteed, the above statement may not hold.
For example, one may construct a specific function that outputs the same f values for a randomly picked path which is unlikely to be optimal. In practice, such cases should seldom be encountered, because the parameters of the neural network that predicts the f value are usually randomly initialized.

Q3: For tasks with much longer horizons, the computation cost of path consistency can be much higher. How can the corresponding cost be reduced in such cases?

A3: The computational complexity of PC mainly lies in preparing the learning target, which computes the mean f value over all states within a selected window. If the task has a much longer horizon, we can still compute the PC target based on a predefined number k of nearest-neighbor nodes. The number k is a given constant, independent of the length of the horizon. According to our empirical experience, k=5 is good enough.

Q4: The authors claim that the weighting mechanism can mitigate the uncertainty/variance in L234, but no experiments or theorems are provided regarding the uncertainty/variance.

A4: For a sampled batch $\\{s_0,s_1,s_2,s_3\\}$, the PC target is calculated as $$\bar{f}=\frac{f(s_0 )+w_1 f(s_1 )+w_2 f(s_2 )+w_3 f(s_3 )}{1+w_1+w_2+w_3}.$$ As mentioned in the paper, PC requires that the f values of states along any optimal path in a search graph be identical to that of $s_0$, and the probability that $s_i$ and $s_0$ are not on the same optimal path grows as $i$ increases. If we assume the f value follows a normal distribution, $f(s_i)$ has the same mean but its variance grows as $i$ increases, because the probability of $f(s_i)$ being farther away from the mean grows. When a smaller weight $w_i$ is given to $f(s_i)$ with a larger index $i$, the variance of the estimated $\bar{f}$ is reduced. From this perspective, the weighting mechanism is a reasonable way to mitigate the variance.

Q5: More ablation study on the weighting mechanism.

A5: Thanks for your suggestion.
We provide experimental results for an exponential weighting approach, in which the weight is calculated as $\exp\\{-0.1 i\\}$. Due to the limited time of the rebuttal period, we report the results on some Atari games as follows. Notice that performance improves on seven out of twelve games when the linear weighting is replaced with the exponential weighting method. The weighting mechanism deserves more investigation in the future.

|Game | Linear | Exp | Game | Linear | Exp |
| :-: | :-: | :-: |:-: | :-: | :-: |
|Breakout | 450.0| 475.7 | Pong |19.8 | 20.2|
|Qbert |13651.6| 11737.5 | Assault |1224.1 |1224.9|
|Asterix | 14771.9| 15750.0| CrazyClimber| 9734.4| 6718.8|
|MsPacman |1594.1 |1319.7 |Amidar | 97.0 |186.2|
|KungFuMaster| 20543.8| 24025.0| Krull | 7782.0| 6131.0|
|Alien |699.7 |627.8| Frostbite | 249.7| 258.1|

---

Rebuttal 2: Comment: Thank you for your reply and experiments. But considering the novelty (Reviewer s6tP also mentioned SCZero, which is quite similar to this work), I will keep my score.
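The weighted PC target and the decay schemes discussed in this thread can be sketched as follows (window length and f values are made-up; the exponential weights $\exp\{-0.1 i\}$ are the ones reported above, while the linear weights here are only illustrative):

```python
import math

def weighted_pc_target(f_vals, weights):
    """Weighted mean of f over the window; the weight on s_0 is fixed to 1."""
    num = f_vals[0] + sum(w * f for w, f in zip(weights, f_vals[1:]))
    den = 1.0 + sum(weights)
    return num / den

f_vals = [4.0, 4.2, 3.9, 4.5]                      # f(s_0..s_3), made-up
linear_w = [1.0 - 0.25 * i for i in range(1, 4)]   # illustrative linear decay
exp_w = [math.exp(-0.1 * i) for i in range(1, 4)]  # scheme from the rebuttal

target_linear = weighted_pc_target(f_vals, linear_w)
target_exp = weighted_pc_target(f_vals, exp_w)

# with uniform weights the target reduces to the plain window mean
assert abs(weighted_pc_target(f_vals, [1.0, 1.0, 1.0])
           - sum(f_vals) / len(f_vals)) < 1e-12
```

Both schemes down-weight the later, less certain states, which matches the variance argument in A4 above.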
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper extends PCZero to more general games where the environment emits non-zero immediate rewards and proposes Generalized Weighted PCZero (GW-PCZero). GW-PCZero is built on EfficientZero with a generalized PC constraint. Specifically, GW-PCZero adds an additional value consistency loss along the sampled path, i.e., $L_{PC}(\theta) = \left(v(s_t;\theta)-\frac{1}{l+1} \sum_{i=0}^l\left[\sum_{j=1}^i r_{t+j}+v\left(s_{t+i} ; \theta\right)\right]\right)^2$, the MSE loss between $v(s_t;\theta)$ and the average value of the {$0$-step TD target, $1$-step TD target, ..., $l$-step TD target}. To reduce the bias and variance of the PC targets when training with off-policy data, GW-PCZero devises a weighting mechanism that gives larger discounts to farther states, which is very similar to the idea of `td-lambda`. So, this paper can be considered as adding an additional `td-lambda`-style value loss to EfficientZero. Experiments on the Atari-100k benchmark validate the superiority of GW-PCZero over EfficientZero. Strengths: * The paper is written clearly. * The proposed GW-PCZero becomes the new SOTA on the Atari-100k benchmark, achieving a higher human normalized score than EfficientZero with much less computational cost. Weaknesses: * 1. The path consistency exists not only on the `optimal` path but on any `on-policy` path. * 2. Although the paper has done a lot of theoretical analysis, the proposed method can be considered as adding an additional `td-lambda`-style value loss to EfficientZero. Therefore, the differences between the newly proposed value loss functions in Eq. (15) and (17) and other typical value loss functions, e.g., td-lambda, v-trace, etc., should be discussed. * 3. For EfficientZero, the policy target reanalyze process and the value target reanalyze process are not necessarily separated into 2 independent passes.
* We can sample a slightly longer sub-trajectory $L_b=\\{s_t, s_{t+1}, \cdots, s_{t+H}\\}$ for each item in a batch (but keep the total transition number in a batch the same), where $H>l$, and use a single batch of MCTS to get $\pi^{MCTS}$ and $v^{MCTS}$ for all states in $L_b$. To compute $z_t, \ldots, z_{t+H-l}$, we follow Eq. (9) in this paper but replace $v$ with $v^{MCTS}$. For $z_{t+H-l-1}, \ldots, z_{t+H}$, we set the $n$-step TD targets with smaller $n \in \\{l-1,\ldots, 0\\}$, e.g., we set $z_{t+H}=v^{MCTS}(s_{t+H})$. * Therefore, the computational cost (the number of MCTS runs) of EfficientZero is not necessarily much greater than that of GW-PCZero. * Minors: * $v(s_{t+1};\theta)$ (line 146) should be $v(s_{t+l};\theta)$. * $L_p$ (line 221) should be $L_b$. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * What's the performance of GW-PCZero if the number of updating steps is set to 120k? * What's the performance of EfficientZero if Eq. (9) is directly used as the value target (i.e., $v(s_{t+l};\theta)$ is not replaced with $v^{MCTS}(s_{t+l};\theta)$)? * For $L_b=\\{s_t, s_{t+1}, \cdots, s_{t+H}\\}$, only $s_t$ is constrained with the PC loss in the real implementation because the sampled batch $L_p$ is too short to deal with the subsequent states in the same way (lines 220-221). Why not directly use Eq. (12) as the loss function (which can utilize all states in $L_b$)? * What if the value loss (of EfficientZero) is removed in GW-PCZero and only the PC-constraint loss shown in Eq. (12) is kept? * What if the PC-constraint loss of GW-PCZero is removed and the value loss (of EfficientZero) is replaced with a `td-lambda`-style or a `v-trace`-style value loss? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
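The loss quoted in this review's summary can be written directly as code; a minimal sketch with $\gamma = 1$ and made-up inputs (not the authors' implementation):

```python
def pc_loss(values, rewards, t, l):
    """MSE between v(s_t) and the average of the 0..l-step TD targets.

    values[i] stands for v(s_i; theta) and rewards[i] for r_i, so the
    i-step target is sum_{j=1..i} r_{t+j} + v(s_{t+i}; theta).
    """
    targets = [sum(rewards[t + 1 : t + 1 + i]) + values[t + i]
               for i in range(l + 1)]
    mean_target = sum(targets) / (l + 1)
    return (values[t] - mean_target) ** 2

values = [1.0, 0.9, 0.7, 0.4, 0.1]   # made-up value estimates
rewards = [0.0, 0.2, 0.3, 0.3, 0.3]  # made-up immediate rewards
loss = pc_loss(values, rewards, t=0, l=3)
```

If the value function were perfectly self-consistent (every TD target equal to $v(s_t)$), the loss would be zero; any disagreement among the $0$..$l$-step targets pulls $v(s_t)$ toward their mean.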
Rebuttal 1:

Rebuttal: We sincerely appreciate your valuable suggestions and would like to take this opportunity to address the raised issues.

Q1: What's the performance of GW-PCZero if the number of update steps is set to 120k?

A1: The Path Consistency (PC) constraint improves the model's learning efficiency and makes it converge faster. The amount of game frames collected for training is 100k, regardless of whether the number of training steps is 60k or 120k. Increasing the number of training steps does not always lead to performance improvement. In practice, we observe certain performance improvements in several games. For example, the score improves from 19.8 to 20.6 for Pong, and from 262.5 to 1793.8 for Kangaroo. Increasing the number of training steps to 120k doubles the training time. The comparison between GW-PCZero and the full-version EfficientZero with 120k training steps is shown below; note that GW-PCZero has already converged in some games after 60k updates. The mean normalized score on these 10 games is 3.86 for GW-PCZero and 2.90 for EfficientZero, respectively.

Game|GW-PCZero|EfficientZero|Game|GW-PCZero|EfficientZero
-|-|-|-|-|-
Breakout|450.0|406.5|DemonAttack|24074.1|13298.0
Jamesbond|525.0|459.4|Krull|7782.0|6047.0
MsPacman|1594.1|1387.0|Kangaroo|1793.8|962.0
Pong|20.6|20.6|Hero|10818.8|8530.1
Amidar|97.0|101.9|PrivateEye|96.9|100.0

Q2: What's the performance of EfficientZero if Eq. (9) is used directly as the value target?

A2: The comparison with and without MCTS in Eq. (9) is shown in Table 9 in the appendix of EfficientZero. In this paper, the results of EfficientZero in Table 2 were obtained by adopting Eq. (9) as the value target. We also add an experiment in which EfficientZero uses the MCTS root value correction: 5 games show performance improvement, while the other 5 do not.
Game|With MCTS|Without MCTS|Game|With MCTS|Without MCTS
-|-|-|-|-|-
Alien|638.8|850.6|Amidar|80.1|60.6
Assault|1352.5|994.8|Asterix|19356.3|17734.4
BankHeist|293.8|276.9|BattleZone|13718.8|15875.0
Boxing|43.3|28.2|Breakout|357.2|366.7
ChopperCmd|631.3|818.8|CrazyClimber|7115.6|8059.4

Q3: Why not directly use Eq. (12) as the loss function?

A3: In practice, it is usually not a good choice to use Eq. (12), which requires all nodes along the entire path to be available. First, the entire path may be long with a large number of nodes, and computing the PC target would be of high complexity. Second, obtaining complete, terminated paths might not be feasible in environments like Atari. Third, as mentioned in the paper, using the entire path to prepare the PC target may be unreliable. In practice, we suggest selecting a certain number of neighboring nodes through a weighting mechanism. This idea of local computation of Eq. (12) is left for future work.

Q4: What if removing the value loss and only keeping the PC loss?

A4: The results of the PC-only version are worse than those of the version where both loss functions are considered. As in the PCZero paper, the value loss and the PC loss cannot replace each other, and better performance is achieved when both are adopted.

Game|Random Player|PC-only|PC + Value
-|-|-|-
Qbert|163.9|2138.3|13651.6
Assault|222.4|229.7|1224.1
Asterix|210.0|2675.0|14771.9
DemonAttack|152.1|3947.0|24074.1
MsPacman|307.3|1186.3|1594.1

Q5: The differences between the PC loss and other typical value loss functions should be discussed. What if removing the PC loss and replacing the value loss with a td-lambda style or a v-trace style value loss?

A5: The path consistency loss and other typical value losses such as TD-lambda share many similarities. The PC target in Eq. (17) combines all $i$-step returns through a weighting mechanism, and it contains TD-lambda as a special case if we set the weights to $(1-\lambda)\lambda^{i-1}$.
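To make the relation concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the function names and the renormalization of the truncated weights are our own assumptions) of a value target that mixes $i$-step returns through a weighting mechanism, with TD($\lambda$) recovered by choosing the weights $(1-\lambda)\lambda^{i-1}$:

```python
def n_step_return(rewards, values, t, n, gamma):
    """n-step bootstrapped return from s_t:
    sum_{i=0}^{n-1} gamma^i * r_{t+i}  +  gamma^n * v(s_{t+n})."""
    g = sum(gamma ** i * rewards[t + i] for i in range(n))
    return g + gamma ** n * values[t + n]

def weighted_value_target(rewards, values, t, weights, gamma=0.997):
    """Generic weighted mixture of i-step returns; weights[i-1] weighs
    the i-step return.  Weights are renormalized because the horizon is
    truncated (our assumption, since Eq. (17) is not reproduced here)."""
    total = sum(weights)
    return sum(
        (w / total) * n_step_return(rewards, values, t, n, gamma)
        for n, w in enumerate(weights, start=1)
    )

def td_lambda_weights(n_max, lam):
    """TD(lambda) as the special case with weights (1-lam)*lam**(i-1).
    (Requires lam < 1 here, otherwise all truncated weights are zero.)"""
    return [(1.0 - lam) * lam ** (i - 1) for i in range(1, n_max + 1)]
```

With $\lambda \to 0$ only the 1-step return contributes; any other weighting (e.g., a local-neighborhood weighting as suggested in A3) plugs into the same `weighted_value_target` interface.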
Other weighting methods can also be considered for PC. The PC loss and TD-lambda do have some differences. In general, learning with PC is more flexible than with TD-lambda, because PC can be implemented in various mathematical forms. When preparing the learning target for PC, both the states after $s_t$ and the states before $s_t$ can be considered, whereas TD-lambda only considers the TD relationships with the states after $s_t$.

If we remove the PC loss of GW-PCZero and replace the value loss with a td-lambda style loss, the performance is poor; the scores of several Atari games are reported below. If we keep the value loss and replace the PC loss with a TD loss, the algorithm still works: TD-lambda is a special case of the PC loss and can be used as a substitute for it. However, neither the PC loss nor TD-lambda can replace the role of the value loss.

Game|Without value loss|With value loss|Game|Without value loss|With value loss
-|-|-|-|-|-
Breakout|3.22|415.4|Asterix|275.0|18668.0
MsPacman|606.9|883.1|Amidar|2.0|116.2
Krull|1982.0|5147.9|Alien|577.8|916.9

Q6: The computational cost of EfficientZero is not necessarily much greater than that of GW-PCZero.

A6: Theoretically, the policy target reanalyze process and the value target reanalyze process can be done simultaneously if $H$ is much larger than $L$, i.e., $H \gg L$, and the reliability of most value function estimations is comparable to that obtained by performing two MCTS runs. However, in practice $H$ is usually small, because a large $H$ means long sub-trajectories, which tend to make the batch data violate the independence of training samples. For both MuZero and EfficientZero, $H$ was 5. $L$ was also 5 for MuZero, while EfficientZero adopted a dynamic $L$, which is 4 or 5 in most cases. If $H=L$ or $H$ is only slightly larger than $L$, a significant proportion of the value target is still determined by the last state, imposing higher requirements on the reliability of the value provided by MCTS.
Therefore, EfficientZero needs to run MCTS twice to ensure its performance. Moreover, PC can greatly improve the learning efficiency, halving the computational resource consumption with 60k training steps.

---

Rebuttal Comment 1.1:

Title: Response to Rebuttal

Comment: Thank you for your reply and experiments. After reading the comments of the other reviewers, I still have some concerns.

(1) The authors did not reply to Weakness 1, which is the main concern: ```the path consistency exists not only on the optimal path but on any on-policy path``` (Reviewers ```15wJ``` and ```s6tP``` also have similar questions). So, will adding the path consistency constraint throughout the whole training cycle make training less effective (especially when using too much off-policy data)? Is the model more likely to fall into suboptimal solutions?
* From the authors' response (A2) to Reviewer ```15wJ```, I found that the authors hold the viewpoint that when the $f$ values match along the path, the path is optimal, which I think is not correct.

(2) After reading the comments of the other reviewers, I agree that the novelty may be limited compared to PCZero. So, I keep my score for now.